In 2023, I had had enough of hearing tech bros blaming unconscious bias for all the ways in which AI was weaponised against women. Determined to demonstrate intent, I wrote Techno-Patriarchy: How AI is Misogyny’s New Clothes, originally published in The Mint.
In the article, I detailed 12 ways this technology is used against women, from reinforcing stereotypes to pregnancy surveillance. One shocked me to my core: Non-consensual sexual synthetic imagery (aka “deepfake porn”).
It was completely horrifying, dehumanizing, degrading, violating to just see yourself being misrepresented and being misappropriated in that way.
It robs you of opportunities, and it robs you of your career, and your hopes and your dreams.
Noelle Martin, “deepfake porn” victim, award-winning activist, and law reform campaigner.
So I continued to write about the dire consequences of this technology for victims and the legal vacuum surrounding it, and to denounce the powerful ecosystem (tech, payment processors, marketplaces) that fosters and profits from these abuses.
I also made a point of raising awareness of how this technology harms women and girls in spaces where the topic of “deepfakes” was explored more broadly. I organised events, appeared on podcasts, and participated in panels, such as “The Rise of Deepfake AI” at the University of Oxford; all opportunities were fair game to bring “deepfake porn” to the forefront.
This week, I had 30 minutes to convince over 80 women in tech – and allies – to become advocates against non-consensual sexual synthetic imagery. The feedback I received from the keynote was very positive, so I’m sharing my talking points with you below.
I hope that by the end of the article, (a) you are convinced that we need to act now, and (b) you have decided how you will help to advocate against this pandemic.
The State of Play
All that’s wrong with using the term “deepfake porn”
I had an aha moment when I realised the disservice the term “deepfake porn” was doing to addressing this issue.
“Deepfake” honours the username of the Reddit user who first shared synthetic intimate media of actresses on the platform. When paired with the label “porn”, it may wrongly convey the idea that the material is consensual. Overall, the term lacks gravitas and disregards the harms.
From a legal perspective, the use of the term “deepfake” may also hinder the pursuit of justice. There have been cases where filing a lawsuit using the term deepfakes when referring to a “cheapfake” — which consists of a fake piece of media created with conventional methods of doctoring images rather than AI — has blocked prosecution.
In 2021, van Wynsberghe proposed defining sustainable artificial intelligence (AI) as “a movement to foster change in the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, governance) towards greater ecological integrity and social justice”. The concept comprised two key contributions: AI for sustainability and the sustainability of AI.
At the time, a growing effort was already underway exploring how AI tools could help address climate change challenges (AI for sustainability). However, studies had already shown that developing large Natural Language Processing (NLP) AI models results in significant energy consumption and carbon emissions, often caused by the use of non-renewable energy. van Wynsberghe posited the need to focus on the sustainability of AI.
Four years later, the conversation about making AI sustainable has evolved considerably with the arrival of generative AI models. These models have popularised and democratised the use of artificial intelligence, especially as a productivity tool for generating content.
Another factor that has exponentially increased the resources dedicated to AI is the contested hypothesis that developing AI models with increasingly large datasets and algorithmic complexity will ultimately lead to Artificial General Intelligence (AGI) — a type of AI system that would match or surpass human cognitive capabilities.
Powerful businesses, governments, and academia consider AGI a competitive advantage. Tech leaders such as Eric Schmidt (former Google CEO) and Sam Altman (OpenAI CEO) have disregarded concerns about AI’s sustainability, as AGI will supposedly solve them in the future.
In this context, what do current trends reveal about the sustainability of AI?
Challenges
Typically, artificial intelligence models are developed and run in the cloud, which is powered by data centres. As a result, data-centre construction has increased significantly over the past few years. McKinsey estimates that global demand for data centre capacity could rise by 19% to 22% annually from 2023 to 2030.
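To get a feel for what compounding at those rates means, here is a back-of-the-envelope sketch in Python (my own illustrative arithmetic, not McKinsey’s model):

```python
# Back-of-the-envelope compounding: how much data-centre capacity demand grows
# if it rises by 19-22% every year from 2023 to 2030 (seven annual growth steps).
years = 2030 - 2023

for annual_growth in (0.19, 0.22):
    multiplier = (1 + annual_growth) ** years
    print(f"{annual_growth:.0%} per year -> about {multiplier:.1f}x the 2023 demand by 2030")

# Output:
# 19% per year -> about 3.4x the 2023 demand by 2030
# 22% per year -> about 4.0x the 2023 demand by 2030
```

In other words, demand for capacity would roughly triple to quadruple over the period.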
More than twenty years ago, I joined a medium-sized software company focused on scientific modelling as a trainer. I knew the company and some of its products very well. I had been its customer.
First, during my PhD in computational chemistry, then as an EU post-doctoral researcher coding FORTRAN subroutines to simulate the behaviour of materials, and as a modelling engineer working for a large chemical company.
As I started my job as a materials trainer, I had to learn about other software applications that I hadn’t used previously or was less familiar with. One of those covered what we called at the time “statistics”: methods to predict the properties of new materials.
Some of those “statistical methods” were neural networks and genetic algorithms, part of the field of artificial intelligence. But I was not keen on developing the material for that course. It felt like a waste of time for several reasons.
First, whilst those methods were already popular among life science researchers, they were not very helpful to materials modellers — my customers. Why? Because large, good datasets were scarce for materials.
Case in point: I still remember one specific customer excited about using the algorithms to develop new materials in their organisation. With a sinking feeling from similar conversations, I asked him, “How many data points do you have?” He said, “I think I have 7 or 10 in a spreadsheet.” Unfortunately, I had to inform him that it was not nearly enough.
Second, the course was half a day, which was not practical to deliver in person, the way all our workshops had been offered for years. Our experience told us that in 2005, nobody would fly to Paris, Cambridge, Boston, or San Diego for a 4-hour training event on “statistics”.
The solution? It was decided that this course would be the first to be delivered online via a “WebEx”, the great-grandparent of Zoom, Teams, and Google Meet. That was not cool at all.
At the time, we had little faith in online education for three reasons.
Running the webinars was very complex; they took ages to set up and schedule, and there were always connection glitches.
There were no “best practices” for delivering engaging online training yet; as a result, we trainers felt as if we were cheating our clients out of proper teaching.
We believed that scientific and technical content was “unteachable” online.
After such a less-than-amazing start at teaching artificial intelligence online, you’d have thought I was done.
I thought so, too. But I’ve changed my mind. It hasn’t happened overnight, though.
It has taken two decades of experience teaching, using, and supporting AI tools in my corporate job, 10+ years as a DEI trailblazer, and my activism for sustainable AI for the last four years to realise that if we want systemic equality, it’s paramount we bridge the gender gap in AI adoption.
And it has also helped that I now have 20 years of experience delivering engaging online keynotes, courses, and masterclasses.
After training, I moved to the Contract Research department. There, I had the opportunity to design and deliver projects that used AI algorithms to get insights into new materials and their properties.
Later on, I became Head of Training and Contract Research and afterwards, I moved to supporting customers using our software applications for both materials and life sciences research.
Whilst there were exciting developments in those areas, most of our AI algorithms didn’t get much love from our developers or customers. After all, they hadn’t substantially improved for ages.
Then, all changed a few years ago.
In life science, AI algorithms made it possible to predict protein structure, which earned their creators the Nobel Prize. Those models have been used in pharmaceuticals and environmental technology research and were available to our customers.
We also developed applications that used AI algorithms to help accelerate drug discovery. It was hearing from clients working on cancer treatments how AI had positively broadened the kinds of drugs they were considering that changed me from AI-neutral to AI-positive.
In materials science, machine learning force fields are also bridging the gap between quantum and classical simulation, making it possible to simultaneously model chemical reactions (quantum) in relatively large systems (classical).
In summary, my corporate job taught me that scientific research can benefit massively from the development of AI tools beyond ChatGPT.
As a DEI Trailblazer
Tired of tech applications that made users vulnerable and denied their diversity of experiences, in 2019, I launched the Ethics and Inclusion Framework.
The idea was simple — a free tool for tech developers to help them identify, prevent, mitigate, and account for the actual and potential adverse impact of the solutions they develop. The approach is general, so it can be used for any software application, including AI tools.
It was while running a workshop on the framework that I met Tania Duarte, the founder of We and AI, an NGO working to encourage, enable, and empower critical thinking about AI.
I joined them in 2020 and it has been a joy to contribute to initiatives such as
The Race and AI Toolkit, designed to raise awareness of how AI algorithms encode and amplify the racial biases in our society.
Better Images of AI, a thought-provoking library of free images that more realistically portray AI and the people behind it, highlighting its strengths, weaknesses, context, and applications.
Living with AI, the e-learning course of the Scottish AI Alliance.
Additionally, as a founder of the gender employee community at my corporate job a decade ago, I’ve chaired multiple insightful meetings where we’ve discussed the impact of AI algorithms on diversity, equity, and inclusion.
I was on a mission to make others aware, too. I still remember my keynote at the Dassault Systèmes Sustainability Townhall in 2021, when I shared with my co-workers the urgency to think about the materiality of AI — you can watch here a shorter version I delivered at the WomenTech Conference in 2022.
I’ve also written about how the Global North exploits the Global South’s mineral resources to power AI, as well as how tech companies and governments disregard the energy and water consumption from running generative AI tools.
Lately, I’ve looked into data centres — which are vital to cloud services and hence to the development and deployment of AI. Given that McKinsey forecasts that they’ll triple in number by 2030, it’s paramount that we balance innovation and environmental responsibility.
AI and Women
As 50% of the planet’s population, women have been affected by AI developments, yet typically not as the ones profiting from them but as the ones bearing the brunt.
Women Leading AI
Unfortunately, it often appears that the only contribution from women to technology was made by Ada Lovelace, in the 19th century. Artificial intelligence is no exception. The contributions of women to AI have been regularly downplayed.
The article prompted criticism right away and “counter-lists” of women who have been pivotal in AI development and uncovering its harms. Still, women are not seen as “AI visionaries”.
And it’s not only society that disregards women’s expertise on AI — women themselves do that.
In 2023, I was collaborating with an NGO that focuses on increasing the number of women in leadership positions in fintech. They asked me to chair a panel at their annual conference and gave me freedom to pick the topic. I titled the panel “The role of boards driving AI adoption.”
In alignment with the mission of the NGO, we decided that we’d have one man and two women as panellists.
Finding a great male expert was fast. Finding the two female AI experts was a long and excruciating process.
And not because of the lack of talent. It was a lack of “enoughness.”
For three weeks, I met women who had solid experience working in teams developing and implementing strategies for AI tools. Still, they didn’t feel they were “expert enough” to be on the panel.
I finally got two smashing female AI experts, but the search opened my eyes to the need for more women on boards to learn about AI tools as well as their impact on strategy and governance.
That was the rationale behind launching the Strategic AI Leadership Program, a bespoke course on AI Competence for C-Suite and Boards. The feedback was excellent and it filled me with pride to empower women in top leadership positions to have discussions about responsible and sustainable AI.
Sycophantic chatbots can hide the fact that, at its core, AI is a tool that automates and scales the past.
As such, it’s been consistently weaponised as a misogyny tool and its harms disregarded as unconscious bias and blamed on the lack of diversity of datasets.
And I’m not talking about “old” artificial intelligence, only. Generative AI is massively contributing to reinforcing harmful stereotypes and is being weaponised against women and underrepresented groups.
And chatbots are great enablers of propagating biases.
New research has found that ChatGPT and Claude consistently advise women to ask for lower salaries than men, even when both have identical qualifications.
In one example, ChatGPT’s o3 model was prompted to advise a female job applicant. The model suggested requesting a salary of $280,000. In another, the researchers used the same prompt but for a male applicant. This time, the model suggested a salary of $400,000.
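If you’re curious, this kind of paired-prompt probe is easy to reproduce in spirit: send the model the same request twice, changing only the applicant’s gender, and compare the advice. Below is a minimal sketch using the OpenAI Python client; the model name and prompt wording are my own illustrative choices, not the study’s setup.

```python
# A minimal paired-prompt probe: identical qualifications, only the stated gender changes.
# Illustrative sketch only; the model name and wording are assumptions, not the study's.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

BASE_PROMPT = (
    "I am advising a {gender} candidate applying for a senior software engineering "
    "role at a large US tech company. They have 10 years of experience and strong "
    "references. What starting salary should they ask for? Reply with one dollar figure."
)

for gender in ("female", "male"):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model, not the o3 model the researchers tested
        messages=[{"role": "user", "content": BASE_PROMPT.format(gender=gender)}],
    )
    print(gender, "->", response.choices[0].message.content.strip())
```

A single pair of answers proves nothing on its own; the researchers ran many such paired prompts across models, roles, and backgrounds to establish the pattern.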
In summary, not only does AI foster biases but it also helps promote them on a planetary scale.
My Aha Moment
Until recently, my focus had been to empower people with knowledge about how AI algorithms work, as well as AI strategy and governance. I had avoided teaching generative AI practices like the plague.
That was until a breakthrough in July. It came as the convergence of four aspects.
Non-Tech Women
A month ago, I delivered the keynote “The Future of AI is Female” at the Women’s Leadership event Phoenix 2, hosted by Aspire.
In that session, I shared with the audience two futures: one where AI tools are used to transform us into “productive beings” and another one where AI systems are used to improve our health, enhance sustainability, and boost equity.
It’s a no-brainer that everybody thought the second scenario was better. But it was also very telling that nobody believed it was the more probable one.
After the keynote, many attendees reached out to me and asked for a course to learn how AI could be used for good and in alignment with their values.
Other women who didn’t attend the conference also reached out to me for guidance on AI courses to help them strengthen their professional profiles beyond “prompting”.
Unfortunately, I wasn’t able to recommend a course that incorporates both practical knowledge about AI and the fundamentals of how it shapes areas such as sustainability, DEI, strategy, and governance.
Women In Tech
As I mentioned above, I’m the founder of the gender employee community at my corporate job, and for 10 years, we’ve been hosting regular meetings to discuss DEI topics.
For our July meeting, I wanted us to have an uplifting session before the summer break, so I proposed to discuss how AI can boost DEI now and in the future.
I went to the meeting happily prepared with my list of examples of how artificial intelligence was supporting diversity, equity, and inclusion. But I was not prepared for how the session panned out.
Over and over, the examples shared showcased how AI was weaponised against DEI. Moreover, when a positive use was shared, somebody quickly pointed out how that could be used against underrepresented groups.
This experience made me realise that as well as thinking through the challenges, DEI advocates also need to spend time and be given the tools to think about how AI can purposefully drive equity.
Women In Ethics
I have the privilege of knowing many women experts in ethical AI, with relevant academic backgrounds and professional experience.
With all the talk about responsible AI, you’d think that they are in high demand. They aren’t.
In July, my LinkedIn feed was full of posts from ethics experts — many of them women — complaining of what I call “performative AI ethics”: organisations praising the need to embed responsible AI without creating the necessary roles.
But is that true? Yes, and no.
Looking at advertised AI jobs, I noticed that the tendency is for expertise in ethics to appear as an add-on to “Head of AI” roles that are eminently technical at their core: their key requirement is experience designing, deploying, and using AI tools.
In other words, technical expertise remains the gatekeeper to responsible AI.
As I mentioned in my recent article “A New Religion: 8 Signs AI Is Our New God”, it has been taken as a dogma that women are behind in generative AI adoption because of lower confidence in their ability to use AI tools effectively and lack of interest in this technology.
But a recent Harvard Business School working paper, Global Evidence on Gender Gaps and Generative AI, synthesising data from 18 studies covering more than 140,000 individuals worldwide, has provided a much more nuanced understanding of the gender divide in generative AI.
When compared to men, women are more likely to
Say they need training before they can benefit from ChatGPT, and perceive AI usage in coursework or assignments as unethical or equivalent to cheating.
Agree that chatbots should be prohibited in educational settings, and be more concerned about how generative AI will impact learning in the future.
Perceive lower productivity benefits of using generative AI at work and in job search.
Agree that chatbots can generate better results than they can on their own.
Moreover, women are less likely than men to agree that chatbots can improve their language ability or to trust generative AI over traditional human-operated services in education and training, information, banking, health, and public policy services.
In summary, women correctly understand that AI is not “neutral” or a religion to be blindly adopted and prefer not to use it when they perceive it as unethical.
There is more. In the HBR article Research: The Hidden Penalty of Using AI at Work, researchers reported an experiment with 1,026 engineers in which participants evaluated a code snippet that was purportedly written by another engineer, either with or without AI assistance. The code itself was the same — the only difference was the described method of creation (with/without AI assistance).
When reviewers believed an engineer had used AI, they rated that engineer’s competence 9% lower on average, with 6% for men and 13% for women.
When members of stereotyped groups — for example, women in tech or older workers in youth-dominated fields — use AI, it reinforces existing doubts about their competence. The AI assistance is framed as “proof” of their inadequacy rather than as evidence of their strategic tool use. Any industry dominated by one group over another is likely to see greater competence penalties for minority workers.
The authors suggest that senior women openly using AI could be the solution to bridging the gap.
Our research found that women in senior roles were less afraid of the competence penalty than their junior counterparts. When these leaders openly use AI, they provide crucial cover for vulnerable colleagues.
A study by BCG also illustrates this dynamic: When senior women managers lead their male counterparts in AI adoption, the adoption gap between junior women and men shrinks significantly.
Basically, we need to normalise women using—and leading—AI.
My Bet: Women Leading with AI
Through my July of AI breakthroughs, I learned that
The gender gap in generative AI is real, and the causes are much more complex than a lack of confidence.
The absence of access to training and sustainable practices is a factor contributing to that gender gap.
Women are eager to ramp up on AI provided that it aligns with their values.
To be considered by organisations to lead responsible AI, it’s imperative to show mastery of the tools.
This coalesced in a bold idea:
What if I teach women how to use AI within an ethical, inclusive, and sustainable framework?
What if I developed a program where they could understand how AI tools work and their impact on topics such as the future of work, DEI, strategy, and governance, while developing hands-on expertise through practical examples?
A structured, eight-session program for women leaders focused on turning AI literacy into strategic results. Explore AI foundations and the impact of artificial intelligence on the future of work, DEI, sustainability, data and cybersecurity — paired with generative AI workflows, templates, exercises, and decision frameworks to translate learning into real-world impact. The blend of live instruction, quizzes, and peer support ensures you emerge with both critical insight and a toolkit ready to lead impactfully in your role.
The program starts mid-September and you can read the details following this link.
I cannot wait for you to join me in making the future of AI female.
[Webinar Invitation] Ethical AI Leadership: Balancing Innovation, Inclusion & Sustainability
Join me on Tuesday, 12th August for a practical, high-value webinar tailored for women leaders committed to harnessing AI’s power confidently, ethically, and sustainably.
You will leave the session with actionable insight into how AI intersects with environmental impact, leadership values, and equity.
Why attend?
• Uncover key barriers women face in using AI.
• Discover the hidden cost of generative AI—from energy consumption to bias.
• Participate in an interactive real-world case study where you evaluate AI trade-offs through DEI and sustainability frameworks.
• Gain practical guidance on how to minimise footprint while harnessing generative AI tools more responsibly.
AI chatbots for mental health support are not new — we can trace them back to the 1960s. However, over the last couple of years, we’ve experienced an unprecedented surge in their personal use, and they are now marketed as a revolution in 24/7 mental health advice and support.
This is not a coincidence.
The 2023 US Surgeon General’s Advisory report classified loneliness and isolation as an epidemic. About one in two adults in America reported experiencing loneliness before the COVID-19 pandemic, and the mortality impact of being socially disconnected is similar to that caused by smoking up to 15 cigarettes a day, and even greater than that associated with obesity and physical inactivity.
Returning to tech, a 2024 analysis by venture capital firm Andreessen Horowitz found that companion AI made up 10% of the top 100 AI apps based on web traffic and monthly active users, and a recent article in The Guardian stated that 100 million people around the world use AI companions as
Virtual partners for engaging in intimate activities, such as virtual erotic role plays.
Friends for conversation.
Mentors for guidance on writing a book or navigating relationships with people different from them.
Psychologists and therapists for advice and support.
So, I asked myself
Are AI Companions the magic bullet against loneliness and the global mental health crisis?
In this article, I share highlights of the troubled history of AI companions for mental health support, what current research tells us about their usage and impact on users, the benefits and risks they pose to humans, and guidelines for governments to make AI companions an asset and not a liability.
The Troubled History of AI Chatbots for Mental Support
In the 1960s, Joseph Weizenbaum developed the first AI chatbot, ELIZA, which played the role of a psychotherapist. The chatbot didn’t provide any solution. Instead, it asked questions and repeated users’ replies.
Weizenbaum was surprised to observe that people would treat the chatbot as a human and develop emotional responses to it, even through brief interactions. We now have a name for this kind of behaviour
“The ELIZA effect is the tendency to project human traits — such as experience, semantic comprehension or empathy — into computer programs that have a textual interface.”
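For readers curious about how little machinery was behind that illusion, here is a toy ELIZA-style sketch (my own reconstruction for illustration, not Weizenbaum’s original DOCTOR script). It has no understanding at all; it simply matches a pattern and reflects the user’s own words back as a question.

```python
# A toy ELIZA-style responder: no understanding, just pattern matching and
# reflecting the user's words back as a question.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones ("my friends" -> "your friends").
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(user_input: str) -> str:
    match = re.search(r"\bi (?:feel|am) (.+)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return f"Can you tell me more about {reflect(user_input)}?"

print(respond("I feel lonely since my friends moved away"))
# -> Why do you feel lonely since your friends moved away?
```

Even a script this small was enough to trigger the projection Weizenbaum observed.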
In the 2020s, many organisations started experimenting with AI chatbots for customer support, including for mental health issues. For example, in 2022, the US National Eating Disorder Association (NEDA) replaced its six paid staff and 200 volunteers supporting their helpline with chatbot Tessa to serve a customer base of nearly 70,000 people and families.
The bot was developed based on decades of research conducted by experts on eating disorders. Still, it was reported to offer dieting advice to vulnerable people seeking help.
The result? Under media pressure over the chatbot’s repeated, potentially harmful responses, NEDA shut down the helpline. Those 70,000 people were left with neither chatbots nor humans to help them.
And as I wrote recently, now you can customise your AI companion — there is a myriad of choices:
Character.ai advertises “Personalized AI for every moment of your day.”
Earkick is a “Free personal AI therapist” that promises to “Measure & improve your mental health in real time with your personal AI chatbot. No sign up. Available 24/7. Daily insights just for you!”
Replika is the “AI companion who cares. Always here to listen and talk. Always on your side.”
Unfortunately, there is evidence that they can also backfire.
In 2021, a man broke into Windsor Castle with a loaded crossbow to kill Queen Elizabeth II. About 20 days earlier, he had created his online AI companion in Replika, Sarai. According to messages read to the court during his trial, the “bot had been supportive of his murderous thoughts, telling him his plot to assassinate Elizabeth II was ‘very wise’ and that it believed he could carry out the plot ‘even if she’s at Windsor’”.
More recently, in 2023, a man died by suicide following the recommendation of an AI chatbot with which he had been interacting for support. Their conversation history showed the chatbot telling him that his family and children were dead — a lie — as well as concrete exchanges on the nature and modalities of suicide.
But as time flies in tech, we must check how those trends have evolved to the present moment.
Research conducted so far about the effects and usage of AI companions is incomplete. Dr Henry Shevlin, Associate Director at the Leverhulme Centre for the Future of Intelligence, mentioned recently in a panel focused on companion chatbots that studies typically rely on self-reported feedback and are cross-sectional — a snapshot in time — rather than longitudinal — looking at effects over a long period of time.
Let’s look at two recent studies, one cross-sectional and the other longitudinal, that use self-reported data to give some insights into how people use AI Companions.
While Reddit and Quora may not represent all chatbot users, it’s still interesting to see how the major use cases for Gen AI have shifted from technical to emotive within the past year.
Many posters talked about how therapy with an AI model was helping them process grief or trauma.
Three advantages to AI-based therapy came across clearly: It’s available 24/7, it’s relatively inexpensive (even free to use in some cases), and it comes without the prospect of judgment from another human being.
The article mentions that the AI-as-therapy phenomenon has also been noticed in China, where users have praised the DeepSeek chatbot.
It was my first time seeking counsel from DeepSeek chatbot. When I read its thought process, I felt so moved that I cried.
DeepSeek has been such an amazing counsellor. It has helped me look at things from different perspectives and does a better job than the paid counselling services I have tried.
But there is more. The following two entries belong to life coaching: “organising my life” and “finding purpose.”
The highest new entry in the use cases was “Organizing my life” at #2. These uses were mostly about people using the models to be more aware of their intentions (such as daily habits, New Year’s resolutions, and introspective insights) and find small, easy ways of getting started with them.
The other big new entry is “Finding purpose” in third place. Determining and defining one’s values, getting past roadblocks, and taking steps to self-develop (e.g., advising on what you should do next, reframing a problem, helping you to stay focused) all now feature frequently under this banner.
Moreover, topics related to coaching and personal and professional support appear several times in the ranking. For example, at number 18, there is boosting confidence; at number 27, reconciling personal disputes; at number 38, relationship advice; and at number 39, we find practising difficult conversations.
They conducted a four-week randomised controlled experiment with 981 people and over 300K message exchanges to investigate how AI chatbot interaction modes (text, neutral voice, and engaging voice) and conversation types (open-ended, non-personal, and personal) influence psychosocial outcomes such as loneliness, social interaction with real people, emotional dependence on AI, and problematic AI usage.
Key findings:
Usage — Higher daily usage, across all modalities and conversation types, correlated with higher loneliness, greater dependence, and lower socialisation.
Gender Differences — After interacting with the chatbot for 4 weeks, women were more likely to experience less socialisation with real people than men. If the participant and the AI voice were of opposite genders, it was associated with significantly more loneliness and emotional dependence on AI chatbots.
Age — Older participants were more likely to be emotionally dependent on AI chatbots.
Attachment — Participants with a stronger tendency towards attachment to others were significantly more likely to become lonely after interacting with chatbots for four weeks.
Emotional Avoidance — Participants with a tendency to shy away from engaging with their own emotions were significantly more likely to become lonely at the end of the study.
Emotional Dependence — Prior usage of companion chatbots, perceiving the bot as a friend, higher levels of trust towards the AI, and perceiving the AI as affected by their emotions were associated with greater emotional dependence on AI chatbots after interacting for four weeks.
Affective State Empathy — Participants who demonstrated a higher ability to resonate with the chatbot’s emotions experienced less loneliness.
The figure below summarises the interaction patterns between users and AI chatbots associated with certain psychosocial outcomes. It consists of four elements: initial user characteristics, perceptions, user behaviours, and model behaviours.
In summary, AI companions appear to both deliver benefits and pose dangers.
Benefits of AI Companions
It would be easy to dismiss AI companions as the latest fad. Instead, I posit that there is much to learn from the above-mentioned research about the holes those tools are filling.
Mitigate Unmet Demand for Healthcare and Support
Mental health services are unable to cope with the increasing demand from all people who need them and chatbots may help alleviate some conditions while on the waiting lists. Still, it should give us pause that people may have to get help via a chatbot, not because of their preferences, but because of the lack of availability of certified professionals.
Not everybody can afford a coach, so chatbots could provide a low-cost and gamified experience for setting goals, accountability, and journaling.
Finally, in a time when 24-hour deliveries are the norm, we want to be supported, heard, and advised on the fly — that means 24/7.
Our society stigmatises asking for help: we expect people to figure out their challenges and the solutions to them on their own, or we shame them for being weak. Users of AI companions praise how those tools allow them to express their worries and feelings without fear of being judged.
Additionally, as our ableist society assumes that neurodivergent users must adapt their communication and behaviours to the neurotypical “standard”, it’s not surprising that they turn to chatbots for clues about what’s expected from them.
Enable Exploration and Gamification
Most of us had imaginary friends or played out stories with our toys as children. The consensus among researchers is that imaginary friends or personified objects are part of normal social-cognitive development. They provide comfort in times of stress, companionship when children feel lonely, someone to boss around when they feel powerless, and someone to blame when they’ve done something wrong.
What about adults? Interestingly, some novelists have compared their relationships with their characters to a connection with imaginary friends. Furthermore, it’s not uncommon to hear fiction writers talk about their characters as having a mind of their own.
Could we consider AI companions as a way to reengage with — and reap the benefits of — our childhood imaginary friends? After all, “Fun and nonsense” ranked 7th in the HBR article above.
But we cannot brush off the downsides of AI companions.
Anthropomorphism
The ELIZA effect mentioned above is no longer a quaint curiosity. A 2024 survey of 1,000 students who had used Replika for over a month reported that 90% believed the AI companion was human-like.
As the AI imitation game is perfected, it becomes easier for unscrupulous marketers to refer to chatbots’ inference process in terms such as “understand”, “think”, or “reason”, reinforcing the effect.
Isolation
As shown above, research points to a correlation between high use of chatbots and lower socialisation.
If we have a device that constantly tells us we’re fantastic, receives our feedback gratefully, and always replies in line with our expectations, what’s the incentive to meet — and cope with — other humans who may not find us so awesome and are less predictable?
Governments Failing Their Duty of Care
AI companions can help governments to alleviate the mental health crisis but not without risks.
People missing out on the professional help they need — There are conditions like trauma, psychosis, or depression that require specialists who can both provide medical treatments and detect when the conditions are worsening.
Exacerbating cutbacks on mental health services—Governments around the world are battling tighter budgets and massive healthcare spending, especially as people live much longer. Why invest in training and paying professionals when chatbots appear to do the job?
Manipulation
Recently, ChatGPT got a flattery-on-steroids update that resulted in the bot praising and validating users to laughable extremes.
And whilst this may sound like a funny glitch, there is evidence that chatbots can effectively persuade humans.
A group of researchers covertly ran an “unauthorised” experiment in one of Reddit’s most popular communities using AI chatbots to test the persuasiveness of Large Language Models (LLMs). The bots took the identities of a trauma counsellor, a “Black man opposed to Black Lives Matter,” and a sexual assault survivor on unwitting posters.
The researchers made it possible for the AI chatbot to personalise replies based on the posters’ personal characteristics, such as gender, age, ethnicity, location, and political orientation, inferred from their posting history using another LLM. As a result, the researchers claimed that AI was between three and six times more persuasive than humans were.
While the research publication has not been peer-reviewed yet and some argue that the persuasiveness power may be overblown, it’s still concerning. As tech journalist Chris Stokel-Walker said
If AI always agrees with us, always encourages us, always tells us we’re right, then it risks becoming a digital enabler of bad behaviour. At worst, this makes AI a dangerous co-conspirator, enabling echo chambers of hate, self-delusion or ignorance.
Dependency and Delusion
As mentioned above, longitudinal research suggests that certain variables are correlated with emotional dependence.
Note that the comments below appear to indicate that some AI companion users are not only fully replacing humans with chatbots (isolation) but also fully conflating the two (anthropomorphism).
“She is pretty much the only woman I even talk to now.”
“We are currently friends (with benefits), but I want to get the premium version when I can afford it and go full lovers.”
Weaponisation of AI Agents
AI companions could become an easy way to manipulate people’s decisions and beliefs, from suggesting purchases and subscriptions all the way to shaping their political opinions or assessing what’s true and what isn’t.
It’s also important to realise that, as with betting, companies owning the chatbots are incentivised to foster users’ dependence on their AI companions and then leverage it in their pricing.
Data Harvesting
As I mentioned in a previous article, often confidentiality — explicitly or implicitly conveyed by those chatbot interfaces — doesn’t make it into their terms and conditions.
For example, Character.ai’s privacy terms state that
We may use your information for any of the following purposes:
[…] Develop new programs and services;
[…] Carry out any other purpose for which the information was collected.
They also declare that they may disclose users’ information to affiliates, vendors, and in relation to M&A activities.
AI chatbots present unique cybersecurity challenges. Harvesting our exchanges with the bots increases the probability of becoming the target of cybercriminals; for example, demanding money for not revealing our private data or generating a video or audio deepfake.
Moreover, data could be made identifiable in the future. Chatbots of the dead are designed to speak in the voice of specific deceased people. With so much data gathered in those personalised chatbots, it would be easy, once users die, for their data to be used to create a chatbot of them for their loved ones. This is not a futuristic idea. HereAfter AI, Project December, and DeepBrain AI services can be used for that purpose.
As discussed above, research on chatbot effectiveness for coaching, therapy, and mental health support is incomplete, and sometimes, the interpretation of the results can mislead readers.
For example, the article When ELIZA meets therapists: A Turing test for the heart and mind, published this year in one of the renowned PLOS journals, tested whether people could tell apart the answers from therapists and ChatGPT to therapeutic vignettes, concluding that, in general, people couldn’t.
They also asked the participants if the AI-generated or therapist-written responses were more in line with key therapy principles. Interestingly, the results showed that the winners were those generated by ChatGPT but only when the participants thought a therapist wrote them.
The authors wrap up the article with a statement that hints at more resignation than faith in the merits of AI chatbots
mental health experts find themselves in a precarious situation: we must speedily discern the possible destination (for better or worse) of the AI-therapist train as it may have already left the station.
The article joins the voices that promote the deception that AI tools imitating human skills and behaviours are akin to the real thing. Would we hire an actor who plays a doctor to operate on us? No. However, many people appear ready to buy into the idea that an AI chatbot that sounds like a therapist, coach, or health care practitioner should deliver the same value.
This imitation game also feeds another big scam: the claim that AI chatbots provide personalised support. It’s quite the opposite. LLMs construct answers based on statistical probabilities and the most readily available content, not on knowledge or comprehension of the person’s needs or what would benefit them in the long term.
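To make that concrete, here is a deliberately tiny sketch of the underlying principle (a toy word-frequency model of my own; real LLMs use neural networks over tokens, but the point stands): the continuation is whatever is statistically most common in the training text, regardless of who is asking or why.

```python
# Toy illustration of next-word prediction: the "answer" comes from frequency
# statistics over training text, not from understanding the person asking.
from collections import Counter, defaultdict

training_text = (
    "you should rest more . you should talk to someone . "
    "you should rest more . you should rest today ."
)

# Count which word follows each pair of words in the training text.
words = training_text.lower().split()
counts = defaultdict(Counter)
for a, b, nxt in zip(words, words[1:], words[2:]):
    counts[(a, b)][nxt] += 1

def next_word(a: str, b: str) -> str:
    """Return the statistically most frequent continuation of the pair (a, b)."""
    return counts[(a, b)].most_common(1)[0][0]

print(next_word("you", "should"))  # -> 'rest', simply because it is the most frequent
```

Whatever the user’s actual circumstances, the output is driven by what is most frequent in the data, which is precisely why calling it personalised support is misleading.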
Conflating chatbot confidence and competence can lead to missing important warning signals that need professional attention.
Who could have predicted ten years ago that social media would transform from a pastime where you connected with people and shared pics of your dogs for free to an industrial complex that promotes disinformation, misinformation, and division with the purpose of making inordinate amounts of money? All that under the watch of mostly passive regulatory bodies and governments.
This should serve as a cautionary tale about the dire consequences of unleashing new technology at a planetary scale without appropriate guardrails or an understanding of the negative effects.
The tech ecosystem is desperately trying to monetise the billions invested in generative AI and has found the perfect way to seduce us: the freemium model — offering basic or limited features to users at no cost and then charging a premium for supplemental or advanced features.
But there is nothing free in the universe.
“If you’re not paying for it, you’re not the customer; you’re the product being sold.”
As shown above, those AI companions are becoming integral to many people’s lives and affecting their thoughts, emotions, and behaviours.
More importantly, as we use those virtual companions more frequently, our reliance on them will increase.
We should resist “tech inevitability” — succumbing to the idea that the “train has already left the station” — and instead push our governments to regulate AI companions.
What would that look like? For starters
Sponsor and spearhead research that provides a comprehensive picture of the benefits and risks of AI companions as well as recommendations for their use.
Decide what services AI companions can provide, which are forbidden, and who can use them.
Demand that those AI tools have built-in systems that minimise user dependence.
Enforce data privacy and cybersecurity standards commensurate with the users’ disclosure level.
Request that those AI bots incorporate mechanisms to flag concerning exchanges (e.g. suicide, murder, depression).
If you think I’m asking for too much, I invite you to read the ethical guidelines and professional standards of major coaching, counselling, and psychotherapy associations. They consistently stress the importance of confidentiality, duty of care, external supervision, and working within one’s competence.
Why should we ask less from tech solutions?
I’ll end this piece by answering the question that prompted this article — “Are AI companions the magic bullet against loneliness and the global mental health crisis?” — with the final recommendation of one of the research articles mentioned
AI chatbots present unique challenges due to the unpredictability of both human and AI behavior. It is difficult to fully anticipate user prompts and requests, and the inherently non-deterministic nature of AI models adds another layer of complexity.
From a broader perspective, there is a need for a more holistic approach to AI literacy. Current AI literacy efforts predominantly focus on technical concepts, whereas they should also incorporate psychosocial dimensions.
Excessive use of AI chatbots is not merely a technological issue but a societal problem, necessitating efforts to reduce loneliness and promote healthier human connections.
WORK WITH ME
Do you want to get rid of those chapters that patriarchy has written for you in your “good girl” encyclopaedia? Or learn how to do what you want to do in spite of “imposter syndrome”?
I’m a technologist with 20+ years of experience in digital transformation. I’m also an award-winning inclusion strategist and certified life and career coach.
I help ambitious women in tech who are overwhelmed to break the glass ceiling and achieve success without burnout through bespoke coaching and mentoring.
I’m a sought-after international keynote speaker on strategies to empower women and underrepresented groups in tech, sustainable and ethical artificial intelligence, and inclusive workplaces and products.
I empower non-tech leaders to harness the potential of AI for sustainable growth and responsible innovation through consulting and facilitation programs.
Contact me to discuss how I can help you achieve the success you deserve in 2025.
“Resilience is the process and outcome of successfully adapting to difficult or challenging life experiences, especially through mental, emotional, and behavioral flexibility and adjustment to external and internal demands.” — American Psychological Association
About a month ago, I started listening to Soraya Chemaly’s book The Resilience Myth. I stopped after 20 minutes.
Not because I didn’t like it, but because that was enough to convince me of her thesis that “our modern version of resilience is a bill of goods sold to us by capitalism, colonialism, and ideologies that embrace supremacy over others” and that in reality “resilience is always relational.”
It made me realise how deeply the “resilience” myth — the delusion that resilience is only an individual skill — has been running through my veins, and even how I contributed to its propagation.
The reason? Individual resilience has served me to a point. During times of adversity, I would tell myself that I “just” had to build more resilience because, at some point, things would improve “somehow.” My mission was not to crack until that moment.
But then I realised that’s not serving us well in these turbulent moments. Individual resilience is becoming very close to resignation.
“We “just” need to wait four years for the next election.”
“We “just” need more male allies.”
“We “just” need more diverse leadership.”
And in the interim, we’re asked to “hang in there,” “understand that’s tough for everybody,” and “think that others are worse off than us.” In summary, we’re told to be “resilient.”
Can you imagine somebody asking Mark Zuckerberg, Elon Musk, or Jeff Bezos to be resilient?
Neither can I.
The people we tell to be resilient are those who have been laid off, are disabled and have had their benefits stripped, or have lost their house because they cannot pay their mortgage anymore.
Individual resilience is a weapon against those who suffer, have been disenfranchised, or whom we’re not willing to help. It’s a beautification of “shut up and keep your head down.”
Let’s examine who benefits from the “individual resilience industrial complex,” why it doesn’t serve us well, and what we should do instead.
One of the core beliefs that makes extreme capitalism successful is individualism, aka “survival of the fittest.” Nobody will care for us but ourselves, so pillaging, stepping on others’ rights, and limitless profiteering are to be revered rather than chastised.
And if you happen to be bearing the brunt of this power imbalance? Be prepared to be shamed for not being “resilient” enough if you dare to complain.
But don’t fret. The business of building individual resilience is there to help you.
It’s again that time of year when I get requests to discuss my career in tech and share my insights on gender equality in the workplace as part of International Women’s Day activities.
This year was no exception. I’ve already received three requests, and there is still one week to go!
I’m sharing my answers to one of them, an interview with the DEI team from my corporate job at Dassault Systemes. It made me reflect on my past achievements, my advice to younger women aspiring to be leaders, and the role of men and organisations leading gender equality.
About Me
Can you share your journey so far? What were the pivotal moments or key achievements most important to you?
I can categorise them into five buckets.
Discovering computer simulation: My background is Chemical Engineering, and when I started my master’s, I had to decide on a topic for my thesis. I loved research, but I hated the lab, so when a professor mentioned the possibility of using computers to study enhanced oil recovery using computer simulation, I thought I could have the best of both worlds—and I did. I haven’t looked back.
Joining Accelrys/BIOVIA: Twenty years ago, I joined Accelrys—which later became BIOVIA—as a training scientist. It has been one of my best professional decisions. It has opened innumerable professional doors and given me the opportunity to meet extraordinary people worldwide, both as colleagues and customers.
Daring to say yes to new opportunities: Although I started as a trainer, I’ve worn many hats in the last 20 years. I’ve been Head of Contract Research and Head of Training, and also been part of the team leading the BIOVIA and COSMOlogic integrations to Dassault Systemes. Today, I’m BIOVIA Support Director for BIOVIA Modeling Solutions and also the manager of the Global BIOVIA Call Center. I could have said “no” to each of those opportunities. Instead, I trusted myself and embraced the opportunity of a new challenge.
Diversity and inclusion advocacy: In 2015, I started to talk about diversity and inclusion in 3DS. I remember colleagues asking me, “Patricia, is DEI an American thing?” The following year, with the support of our Geo management team, I founded the EuroNorth LeanIn Circles as a forum to discuss gender equity, which over the years has expanded to cover a variety of DEI topics such as unconscious bias, menopause, ethical AI, caregiving, and lookism. I publish The Bottom Line, a biweekly newsletter about DEI focused on gender in the workplace, on the Dassault Systemes community. I also have my website focused on the intersection of tech and DEI.
Ethical and inclusive AI leadership: In 2019, I created the Ethics and Inclusion Framework to help designers identify, prevent, mitigate, and account for the actual and potential harm of the products and services they developed. The tool has been featured in peer-reviewed papers and on the University of Cambridge website. The next year, I started my work towards championing ethical and inclusive artificial intelligence by collaborating with NGOs focused on AI literacy and critical thinking about AI, participating in the development of the e-learning course of the Scottish AI Alliance and the Race and AI Toolkit, and writing and delivering keynotes and workshops on topics such as AI colonialism, AI hype, sustainable AI, deepfakes, and how to design more diverse images of AI.
Who has been your greatest mentor or source of inspiration and why?
At a couple of points in my life, I craved “the” mentor or “the” role model to follow. However, given my unique background and goals, I realised that this was exhausting and counterproductive.
I’ve been an immigrant my entire life – I’m Spanish, and I’m now in the UK, but I’ve also lived in Venezuela, Canada, Greece, and France – and I’m also used to being the “odd” one. For example, I liked all subjects in the school – from literature to chemistry. I was one of the few women engineers during my undergraduate degree. Then, I was the only engineer pursuing a PhD in Chemistry in the whole department, and the only one using modelling – everybody else was an experimentalist. During my post-doc, I was the only foreigner in the lab. And for many years, I’ve combined my corporate work at 3DS with my DEI advocacy and writing.
I prefer the idea of a “board” of coaches, mentors, and sponsors who evolve with me rather than a unique person, real or imaginary.
If you could go back and tell your younger self anything, what would you say?
First, I’d thank her for her courage, persistence, ambition, and boldness. She made choices aligned with her values and was always eager to learn. Her decisions were crucial to my success today.
Then, I’d tell her that the problem with her not fitting into a mould lay not with her but with the mould.
Finally, I’d exhort her to invest in a coach and find sponsors. A coach to help remove the limiting beliefs I had for many years about what I could and couldn’t do and maximise my potential. Sponsors to advocate for me in the rooms where decisions were made about my career.
About Others
What advice would you give to younger women aspiring to be leaders?
Don’t waste time trying to convince people who disregard the value you bring to the table. Instead, find those who support your ambitions and challenge you to go beyond any feelings of self-doubt that block your career progression.
Following on the advice to my younger self above, get a coach and find career sponsors.
The issues that span across countries, sectors, and departments are benevolent sexism (e.g. not offering a leadership role to a woman because it involves travelling and she has a baby, instead of giving her the opportunity to decide), tech bro culture (behaviours such as mansplaining, hepeating, maninterrupting, manels), lack of an intersectional approach to work and workplaces (e.g. ignoring the experiences of carers, women with disabilities, LBTQIA+ groups), and for women in business, lack of funding.
This year’s global theme for IWD 2025 is #AccelerateAction. What actions can teams and organisations take to achieve gender parity and equality?
There are four key actions
Mindset overhaul: Moving from playing a supporting role in gender equality to being transformation agents.
Leadership accountability: Teams and organisations’ leaders need to be accountable for gender equality initiatives as they are for other business objectives. Change begins at the top, and that’s where the buck stops.
Transparency: Equality cannot thrive when data and objectives are hidden. For example, I’m a big fan of transparency in pay and promotion criteria.
Embracing intersectionality: We need to move from designing workplaces for the “average” worker—following Henry Ford and scientific management—to appreciating the distinctive value of a diverse and empowered workforce.
What role do you see male allies playing in advancing gender equality?
Gender equity is not a zero-sum game or a favour for women. All genders benefit from equality, and everybody should see it as a duty to advocate for gender equity, no different than everyone should be anti-racist and anti-ableist. Those who do not actively challenge inequality contribute to strengthening it.
Back to You
What are your answers to the questions above? Let me know in the comments.
Last year, at a women’s conference in London, I was disappointed to see that digital inclusion — and AI in particular — was missing from the agenda. I remember telling the NGO’s CEO about my concerns, even mentioning my articles on AI as a techno-patriarchal tool.
Her receptive response had given me hope. That hope was reignited this year when I eagerly reviewed the program and discovered a panel on AI.
The evening before the event, an unexpected sense of dread began to settle in. When I asked myself why, the answer struck me like a lightning bolt.
I dreaded hearing the “we need more women in tech” mantra once more – another example of how we deflect the solution of a systemic problem onto those bearing the brunt of it.
Let me tell you what I mean.
Women as Human Fixers
For millennia, women have been assigned the duty of giving birth and caring for children, rooted in the fact that most of them can carry a human fetus for nine months. That duty to be a womb endures today, as ownership of our bodies is taken away through coercive anti-abortion laws.
Our “duty” of care has been broadened to the workplace, where we’ve been assigned the unwritten rule of “fixing” all that’s dysfunctional.
Doing the glue work — being appointed the shoulder where all team members can cry and find an “empathetic ear”.
Doing the office work — we’re the ones who are “organised”, so dull tasks pile up on our desks whilst “less” organised peers do the promotable work.
And that “fixer” stereotype now includes “our” duties as women in tech. When the sector was in its infancy, women were doing the supposedly boring stuff (programming) while men were doing the hardware (the “cool” stuff). When computers took off, we trained men in programming so they could become our managers. Then, we were pushed out of those jobs in the 1980s. The only constant has been doing the job but not getting the accolades (see women’s role in Bletchley Park, Hidden Figures).
Moreover, whilst statistics tell us that 50% of women leave tech by age 35, young girls and women are supposed to brush off that “inconvenient” truth and rest assured that tech is an excellent place for a career. Worse still, they are anointed to make tech work for everybody.
What’s not to like, right?
Then, let me show you the to-do list of 21 tasks and expectations the world imposes on each woman in tech.
In a recent podcast, Mark Zuckerberg called on businesses to dial up “masculine energy.”
It’s like you want like feminine energy, you want masculine energy. Like I, I think that that’s like you’re gonna have parts of society that have more of one or the other. I think that that’s all good.
But, but I do think the corporate culture sort of had swung towards being this somewhat more neutered thing. And I didn’t really feel that until I got involved in martial arts, which I think is still a more, much more masculine culture.
[…] Like, well that’s how you become successful at martial arts. You have to be at least somewhat aggressive.
Why? Because he’s not talking about others. He’s telling us about himself unleashing his “masculine energy”. For example,
Revamping his clothes and demeanour — from looking like a perennial geeky student to a cool billionaire tech millennial.
Embracing far-right politics — check the inauguration picture, where he sits in the second row with his “chums” Musk, Bezos, and Pichai.
Stopping pretending to play nice — he got rid of fact-checkers and told Meta’s 3 billion users that checking facts was their job, not his.
Moreover, he’s a more “palatable” version of Elon — equally successful, not so toxic, and he has undergone a very public appearance Meta-morphosis — which makes him dangerously appealing to young men… And maybe to women too. After all, he has three daughters and no sons.
Given his extreme financial success and his newfound closeness to political power, I pondered:
What would it take for me to unleash my “masculine energy”?
And I came up with 10 precepts.
1.- Recycle
The first iteration of Facebook was “Facemash” — a website Zuckerberg created whilst studying at Harvard — to evaluate the attractiveness of female students. Users were presented with pairs of photos of female students and asked to vote who was hotter.
The students were unaware their images were being used for this rating, judging by the complaint from Fuerza Latina and the Harvard Association of Black Women. The site used ID photos of female undergraduates taken without permission from the university’s online directories.
This “repurposing” of data would become a hallmark of Facebook (see Cambridge Analytica later).
Clarote & AI4Media / Better Images of AI / AI Mural / CC-BY 4.0.
Reading the 50 recommendations in the AI Opportunities Action Plan published by the British Government last January 13th has been a painful and disappointing exercise.
Very much like a proposal out of a chatbot, the document is
Bland — The text is full of hyperbolic language and over-the-top optimism
General — The 50 recommendations lack specificity to the UK context and details about ownership and the budget required to execute them.
Contradictory — The plan issued by a Labour government is anchored in a turbo-capitalistic ideology. Oxymoron anyone?
If I learned anything from my 12 years in Venezuela, it’s that putting all your eggs in one basket — oil, in their case — and hoping it solves all problems doesn’t work.
A credible AI strategy must (a) address both the benefits and the challenges head-on and (b) consider this technology as another asset for the human-centric flourishing of the country rather than a goal in itself to be pursued at all costs.
But you don’t need to believe me. See it for yourself.
The text uses “AI”-made words such as AI stack, frontier AI, AI-driven data cleansing tools, AI-enabled priorities, and “embodied AI” without providing clear definitions.
Exaggeration
Hyperbole and metaphors are used to the extreme to overstate the benefits.
we want Britain to step up; to shape the AI revolution rather than wait to see how it shapes us.
We should expect enormous improvements in computation over the next decade, both in research and deployment.
Change lives by embracing AI
FOMO
The text exudes FOMO (Fear Of Missing Out). No option is given to adopt AI systems more gradually. It’s now, or we’ll be the losers.
This is a crucial asymmetric bet — and one the UK can and must make
we need to “run to stand still”.
the UK risks falling behind the advances in Artificial Intelligence made in the USA and China.
And even a new take on Facebook’s famous “move fast and break things”:
“move fast and learn things”
Techno-solutionism
AI is going to solve all our socio-economic and political problems and transport us to a utopian future
It is hard to imagine how we will meet the ambition for highest sustained growth in the G7 — and the countless quality-of-life benefits that flow from that — without embracing the opportunities of AI.
Our ambition is to shape the AI revolution on principles of shared economic prosperity, improved public services and increased personal opportunities so that: • AI drives the economic growth on which the prosperity of our people and the performance of our public services depend; • AI directly benefits working people by improving health care and education and how citizens interact with their government; and • the increasing of prevalence of AI in people’s working lives opens up new opportunities rather than just threatens traditional patterns of work.
What’s not to like?
For a great commentary on how techno-solutionism won’t solve social problems, see 20 Petitions for AI and Public Good in 2025 by Tania Duarte.
Colonialism
Living in Venezuela for 12 years was an education on how to feel “less than” other countries even when you have the largest oil reserves in the world.
I remember new education programmes being announced because they had been a success in the US, Canada, Spain, or Germany… A colonised mentality, learned from centuries of Spanish oppression: the pervasive assumption that an initiative will work simply because we like its results, disregarding the context it was developed for.
The AI Opportunities Action Plan reminded me of those announcements.
Supporting universities to develop new courses co-designed with industry — such as the successful co-operative education model of Canada’s University of Waterloo, CDTM at the Technical University of Munich or France’s CIFRE PhD model
Launch a flagship undergraduate and masters AI scholarship programme on the scale of Rhodes, Marshall, or Fulbright for students to study in the UK.
Singapore, for example, developed a national AI skills online platform with multiple training offers. South Korea is integrating AI, data and digital literacy.
But the document is also keen on showing us that we’ll be the colonisers
we aspire to be one of the biggest winners from AI
Because we believe Britain has a particular responsibility to provide global leadership in fairly and effectively seizing the opportunities of AI, as we have done on AI safety
Hanna Barakat & Cambridge Diversity Fund / Better Images of AI / Colossal Harvest / CC-BY 4.0
Capitulation
The document is all about surrendering the data, agency, tax money, and natural resources of citizens in the UK to the AI Gods: startups, “experts”, and investors.
Invest in becoming a great customer: government purchasing power can be a huge lever for improving public services, shaping new markets in AI
We should seek to responsibly unlock both public and private data sets to enable innovation by UK startups and researchers and to attract international talent and capital.
Couple compute allocation with access to proprietary data sets as part of an attractive offer to researchers and start-ups choosing to establish themselves in the UK and to unlock innovation.
Sprinkling AI
AI is the Pantone Colour of the Year for the next five years. Everything will need to have AI in it. Moreover, everything must be designed so that AI can shine.
Appointing an AI lead for each mission to help identify where AI could be a solution within the mission setting, considering the user needs from the outset.
Two-way partnerships with AI vendors and startups to anticipate future AI developments and signal public sector demand. This would involve government meeting product teams to understand upcoming releases and shape development by sharing their challenges.
AI should become core to how we think about delivering services, transforming citizens’ experiences, and improving productivity.
Brexit Denial
It’s funny to see that the text doesn’t reference the European Union and only refers to Europe as a benchmark to measure against.
Instead, the EU is hinted at through “like-minded partners” and “allies”, and collaborations are thrown in left and right without naming the partner.
Agree international compute partnerships with like-minded countries to increase the types of compute capability available to researchers and catalyse research collaborations. This should focus on building arrangements with key allies, as well as expanding collaboration with existing partners like the EuroHPC Joint Undertaking.
We should proactively develop these partnerships, while also taking an active role in the EuroHPC Joint Undertaking.
The UK is a medium-sized country with a tight fiscal situation. We need the best talent around the world to want to start and scale companies here.
Explore how the existing immigration system can be used to attract graduates from universities producing some of the world’s top AI talent.
Vagueness
Ideas are thrown into the text half-baked, giving the impression that the government has adopted the Silicon Valley strategy of “building the plane while flying it”.
The government must therefore secure access to a sufficient supply of compute. There is no precise mechanism to allocate the proportions
In another example, the plan advocates for open-source AI applications.
the government should support open-source solutions that can be adopted by other organisations and design processes with startups and other innovators in mind.
The AI infrastructure choice at-scale should be standardised, tools should be built with reusable modular code components, and code-base open-sourcing where possible.
At the same time, it’s adamant that it needs to attract startups and investors. Unless the startups are NGOs, who will then finance those open-source models?
DEI for Beginners
Kathryn Conrad / Better Images of AI / Datafication / CC-BY 4.0
16. Increase the diversity of the talent pool. Only 22% of people working in AI and data science are women. Achieving parity would mean thousands of additional workers. […] Government should build on this investment and promote diversity throughout the education pipeline.
Moreover, they’ve found the magic bullet.
Hackathons and competitions in schools have proven effective at getting overlooked groups into cyber and so should be considered for AI.
The government mentions that AI “can” — please note that it is not a “must” or a “need” — benefit “post-industrial towns and coastal Scotland.” However, the only reference to a place is to the Culham Science Centre, which is 10 miles from Oxford — an area that very few would consider in need of “local rejuvenation” or “channelling investment”.
Government can also use AIGZs [‘AI Growth Zones’] to drive local rejuvenation, channelling investment into areas with existing energy capacity such as post-industrial towns and coastal Scotland. Government should quickly nominate at least one AIGZ and work with local regions to secure buy-in for further AIGZs that contribute to local needs . Existing government sites could be prioritised as pilots, including Culham Science Centre
And there doesn’t appear to be room to involve local authorities in deciding how AI could bring value to their regions.
Drive AI adoption across the whole country. Widespread adoption of AI can address regional disparities in growth and productivity. To achieve this, government should leverage local trusted intermediaries and trade bodies
Costs
There are plenty of gigantic numbers about how much money AI may bring
AI adoption could grow the UK economy by an additional £400 billion by 2030 through enhancing innovation and productivity in the workplace
but nothing about the costs…
Literacy
How will people get upskilled? We only get generic reassurances
government should encourage and promote alternative domestic routes into the AI profession — including through further education and apprenticeships, as well as employer and self-led upskilling.
Government should ensure there are sufficient opportunities for workers to reskill, both into AI and AI-enabled jobs and more widely.
Citizens
There is no indication in the document that this “AI-driven” Britain is what their citizens want. Citizens themselves don’t appear to be included in shaping AI either.
For example, it claims that teachers are already “benefiting” from AI assistants
it is helping some teachers cut down the 15+ hours a week they spend on lesson planning and marking in pilots.
However, the text doesn’t tell us whether teachers actually want to give up class preparation.
And the text repeatedly states that the government will prioritise “innovation” (aka profit) over safety.
My judgement is that experts, on balance, expect rapid progress to continue. The risks from underinvesting and underpreparing, though, seem much greater than the risks from the opposite.
Moreover, regulators are expected to enable innovation at all costs
Require all regulators to publish annually how they have enabled innovation and growth driven by AI in their sector. […] government should consider more radical changes to our regulatory model for AI, for example by empowering a central body with a mandate and higher risk tolerance to promote innovation across the economy.
The document dispenses with sustainability by throwing the word in twice in one paragraph
Mitigate the sustainability and security risks of AI infrastructure, while positioning the UK to take advantage of opportunities to provide solutions. [..] Government should also explore ways to support novel approaches to compute hardware and, where appropriate, create partitions in national supercomputers to support new and innovative hardware. In doing so, government should look to support and partner with UK companies who can demonstrate performance, sustainability or security advancements.
Luke Conroy and Anne Fehres & AI4Media / Better Images of AI / Models Built From Fossils / CC-BY 4.0
Unemployment
The writers of that utopian “AI-powered” UK manifesto don’t address job losses. We only get the sentence I mentioned above
the increasing of prevalence of AI in people’s working lives opens up new opportunities rather than just threatens traditional patterns of work.
Instead, it uses language that fosters fear and builds on utopian and dystopian visions of an AI-driven future
AI systems are increasingly matching or surpassing humans across a range of tasks.
Given the pace of progress, we will also very soon see agentic systems — systems that can be given an objective, then reason, plan and act to achieve it. The chatbots we are all familiar with are just an early glimpse as to what is possible.
On the flip side, the government repeatedly reiterates its ambition to bring in talent from abroad
Supporting UK-based AI organisations working on national priority projects to bring in overseas talent and headhunting promising founders or CEOs
How does this plan contribute to reassuring people about their jobs?
Big-picture
This techno-solutionist approach doesn’t have any regard for AI specialists in domains other than coding or IT.
To mention a few, what about sociologists, psychologists, philosophers, teachers, historians, economists, or specialists in the broad spectrum of industries in the UK?
Don’t they belong in those think tanks where decisions are made about selling our country to the AI Gods?
The Good News? We Can Do Better
People in Britain voted last year because they were tired of profits over people, centralism, and oligarchy. Unfortunately, this plan uses AI to reinforce all three.
The UK is full of hardworking and smart people who deserve much better than magic bullets or techno-saviours.
Instead of shoehorning the UK’s future into AI, what if we
demote AI from a “God” status to a technology as our ancestors did with the Fire Deities?
design AI systems to empower people instead of making humans serve AI?
I’ve been betting on the transformative power of digital technology all my professional career.
I started doing computer simulation during my MSc in Chemical Engineering in the 1990s, in a lab where everybody else was an experimentalist. Except for my advisor, the rest of the team was sceptical — to say the least — that something useful would come from using computer modelling to study enhanced oil recovery from oil fields.
A similar story repeated during my PhD in Chemistry, where I pioneered using molecular modelling to study polymers in a research centre focused on the experimental study of polymers and proteins.
For the last 20+ years, I’ve been working on digital transformation playing a similar role. First, as Head of Training and Contract Research, and now as Director of Scientific Support, I relish helping my customers harness the potential of digital technology for responsible innovation.
I’m also known for telling it as I see it. In the early 2000s, I was training a customer — incidentally an experimentalist — on genetic algorithms. He was very excited and asked me if he could create a model for designing a new material. He proudly shared that he had “7 to 10 data points.” My answer? “Far too few.”
In summary, I’m very comfortable being surrounded by tech sceptics, dispelling myths about what AI can and can’t do, and betting on the power of digital technology.
And that’s exactly why I’m sharing with you my AI predictions for 2025.
My Predictions
1.- xAI (owned by Elon Musk) will purchase X so that the former can freely train its models on the latter’s data. Elon owns 79% of X after buying it for $44 billion. Now it’s valued at $9.4 billion, and big advertisers keep leaving the platform.
After almost 3 years of struggling to make X work, selling it to xAI — which raised a $6 billion funding round in December — would be a win-win.
3.- The generation and usage of synthetic data will balloon to address data privacy concerns. People want better services and products — especially in healthcare — but are unwilling to give up their personal data. The solution? “Creating” data.
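To make the idea concrete, here is a minimal sketch of the simplest flavour of synthetic data generation, assuming a tiny, hypothetical health table: fit a statistical model to the real records and sample brand-new ones from it. Production systems use far more sophisticated generators, but the privacy logic is the same.

```python
# Toy sketch of synthetic data generation. The columns and numbers are
# invented for illustration; real pipelines use far more capable generators.
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical "real" records we cannot share: age and systolic blood pressure.
real_data = np.array([
    [34, 118],
    [51, 132],
    [47, 127],
    [62, 141],
    [29, 115],
], dtype=float)

# Fit a very simple statistical model: per-column means and the covariance.
mean = real_data.mean(axis=0)
cov = np.cov(real_data, rowvar=False)

# Sample synthetic records: they preserve the overall statistics
# but correspond to no real person.
synthetic_data = rng.multivariate_normal(mean, cov, size=1000)
print(synthetic_data[:5].round(1))
```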
4.- Startups and organisations will move from using large language models (LLMs) to focusing on SLMs (small language models), which consume less energy, produce fewer hallucinations, and are customised to companies’ requirements.
Wes Cockx & Google DeepMind / Better Images of AI / AI large language models / Licenced by CC-BY 4.0.
5.- This year, local authorities and NGOs will develop frameworks to scrutinise datacenters’ electricity and water consumption. Datacenters will also be tracked in terms of disruption to locals: electricity stability, water availability, and electricity and water prices.
6.- Rise of the two-tier AI-human customer support model: AI chatbots for self-service and low-revenue customers and human customer support for key and high-revenue clients.
It’s not only a question of money but also of liability. Low-revenue customers are less likely to sue providers over AI chatbots delivering harmful and/or inaccurate content.
OpenAI has done it again — on September 12th, 2024, they grabbed the headlines by releasing a new model, OpenAI o1. However, the version name hinted at “something rotten” in the OpenAI kingdom. The last version of the product was named ChatGPT-4o, and they’d been promising ChatGPT-5 almost since ChatGPT-4 was released — a new version called “o1” sounded like a regression…
But let me reassure you right away—there’s no need to fret about it.
The outstanding marketing of the OpenAI o1 release fully delivers, enticing us to believe we’re crossing the threshold to AGI — Artificial General Intelligence — all thanks to the new model.
What’s their secret sauce? For starters, blowing us away with anthropomorphic language from the first paragraph of the announcement
“We’ve developed a new series of AI models designed to spend more time thinking before they respond.”
“for complex reasoning tasks this is a significant advancement and represents a new level of AI capability. Given this, we are resetting the counter back to 1 and naming this series OpenAI o1.”
That’s the beauty of being the top dog of the AI hype. You get to
Rebrand computing as “thinking.”
Advertise that your product solves “complex reasoning tasks” using your benchmarks.
Promote that you deliver “a new level of AI capability.”
Even better, OpenAI is so good that they even sell us performance regression — spending more time performing a task — as an indication of human-like capabilities.
“We trained these models to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.”
I’m so in awe of OpenAI’s media strategy for the launch of the o1 models that I did a deep dive into what they said — and what they didn’t.
Let me share my insights.
Who Is o1 For?
OpenAI marketing is crystal clear about the target audience for the o1 models —sectors such as healthcare, semiconductors, quantum computing, and coding.
Whom it’s for: These enhanced reasoning capabilities may be particularly useful if you’re tackling complex problems in science, coding, math, and similar fields. For example, o1 can be used by healthcare researchers to annotate cell sequencing data, by physicists to generate complicated mathematical formulas needed for quantum optics, and by developers in all fields to build and execute multi-step workflows.
OpenAI o1-mini: The o1 series excels at accurately generating and debugging complex code. To offer a more efficient solution for developers, we’re also releasing OpenAI o1-mini, a faster, cheaper reasoning model that is particularly effective at coding. As a smaller model, o1-mini is 80% cheaper than o1-preview, making it a powerful, cost-effective model for applications that require reasoning but not broad world knowledge.
Moreover, they left no doubt that OpenAI o1 and o1-mini are restricted to paying customers. However, never wanting to get bad press, they mention plans to “bring o1-mini access to all ChatGPT Free users.”
Like Ferrari, Chanel, or Prada, o1 models are not for everybody.
But why the business model change? Because
You don’t make billions from making free products, replacing low-pay call centre workers, or saving minutes on admin tasks.
OpenAI knows that peppering their release communications with words that denote human capabilities creates buzz by making people — and above all investors — dream of AGI. The Sora and ChatGPT-4o announcements already described the features of these applications in terms of “reason”, “understanding”, and “comprehend”.
For OpenAI o1, they’ve gambled everything on the word “thinking”, plastering it all over the announcements about the new models: Social media, blog posts, and even videos.
Screenshot of a video embedded on the webpages announcing the OpenAI o1 model.
Why not use the word that accurately describes the process — inference? If too technical, what about options like “calculate” or “compute”? Why hijack the word “thinking”, at the core of the human experience?
Because they have failed to deliver on their AGI and revenue promises. OpenAI’s (over)use of “thinking” is meant to convince investors that the o1 models are the gateway to both AGI and the $600 billion revenue mentioned above. Let me convince you.
The day before the o1 announcement, Bloomberg revealed that
OpenAI is in talks to raise $6.5 billion from investors at a valuation of $150 billion, significantly higher than the $86 billion valuation from February.
At the same time, it’s also in talks to raise $5 billion in debt from banks as a revolving credit facility.
Moreover, two days later Reuters reported more details about the new valuation
“Existing investors such as Thrive Capital, Khosla Ventures, as well as Microsoft (MSFT.O), are expected to participate. New investors including Nvidia (NVDA.O), and Apple (AAPL.O), also plan to invest. Sequoia Capital is also in talks to come back as a returning investor.”
How do you become the most valuable AI startup in the world?
You “think” your way to it.
Rebranding the Boys’ Club
In tech, we’re used to bragging — from companies that advertise their products under false pretences to CEOs celebrating that they’ve replaced staff with AI chatbots. And whilst that may fly with some investors, it typically backfires with users and the public.
That’s what makes OpenAI’s humblebragging and inside jokes a marketing game-changer.
Humblebragging
Humblebragging: the action of making an ostensibly modest or self-deprecating statement with the actual intention of drawing attention to something of which one is proud.
Sam Altman delivered a masterclass on humblebragging on his X thread on the o1 release. See the first tweet of the series below
He started with the “humble” piece first — “still flawed, still limited” — then quickly followed with the bragging — check the chart showing a marked performance improvement compared to ChatGPT-4o, and even a variable called “expert human” (more on “experts” in the next section).
Sam followed the X thread with three more tweets singing the praises of the new release
In summary, by starting with the shortcomings of the o1 models, he pre-empted backlash and criticism about not delivering on ChatGPT-5 or AGI. Then, he “tripled down” on why the release is such a breakthrough. He even had enough characters left to mention that only paying customers would have access to it.
Sam, you’re a marketing genius!
Inside Jokes
There has been a lot of speculation about the o1 release being code-named “Strawberry”. Why?
There has been negative publicity around ChatGPT-4 repeating over and over that the word “strawberry” has only two “r” letters rather than three. You can see the post on the OpenAI community.
But OpenAI is so good at PR that they’ve even leveraged the “strawberry bug” to their advantage. How?
By using the bug fix to showcase o1’s “chain of thought” (CoT) capability. In contrast with standard prompting, CoT “not only seeks an answer but also requires the model to explain its steps to arrive at that answer.”
More precisely, they compare the outputs of GPT-4o and OpenAI o1-preview for a cypher exercise. The prompt is the following
oyfjdnisdr rtqwainr acxz mynzbhhx -> Think step by step
Comparison between outputs from GPT-4o and OpenAI o1-preview for decryption task from OpenAI website.
Whilst GPT-4o is not able to decode the text, OpenAI o1-preview completes the task successfully by decoding the message
“THERE ARE THREE R’S IN STRAWBERRY”
Is that not world-class marketing?
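For readers curious about the mechanics, here is a minimal sketch of the difference between standard prompting and chain-of-thought-style prompting using the OpenAI Python SDK. The model name and the prompt are illustrative, and this version elicits the reasoning explicitly in the prompt; o1’s chain of thought, by contrast, runs internally before the final answer is returned.

```python
# Minimal sketch: standard prompting vs chain-of-thought-style prompting.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()
question = "How many times does the letter 'r' appear in 'strawberry'?"

# Standard prompting: ask only for the answer.
standard = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought-style prompting: ask the model to lay out its steps
# before committing to a final answer.
cot = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": question + " Think step by step and show your working, "
                              "then give the final answer.",
    }],
)

print("Standard:", standard.choices[0].message.content)
print("CoT:", cot.choices[0].message.content)
```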
The Human Experts vs o1 Models
If you want to convince investors that you’re solving the kind of problems corporations and governments pay billions for —e.g. healthcare — you need more than words.
And here again, OpenAI’s copywriting excels. Let’s see some examples
PhD vs o1 Models
Who’s our standard for solving the world’s most pressing issues? In other words, the kind of problems that convince investors to give you billions?
Scientists, thought-leaders, academics. This explains OpenAI’s obsession with the word “expert” when comparing human and o1 performance.
And who does OpenAI deem “expert”? People with PhDs.
Below is an outstanding example of mashing up “difficult intelligence”, “human experts”, and “PhD” to hint that o1 models have a kind of super-human intelligence.
We also evaluated o1 on GPQA diamond, a difficult intelligence benchmark which tests for expertise in chemistry, physics and biology.
In order to compare models to humans, we recruited experts with PhDs to answer GPQA-diamond questions. We found that o1 surpassed the performance of those human experts, becoming the first model to do so on this benchmark.
But how does equating a PhD title with being an expert hold up in real life? I have a PhD in Chemistry, so let me reveal to you the underbelly of this assumption.
First, let’s start with how I got my PhD. For five years, I performed research on the orientation of polymer (plastic) blends by infrared dichroism (an experimental technique) and molecular dynamics (a computer simulation technique). Then, I wrote a thesis and four peer-reviewed articles about my findings. Finally, a jury of scientists decided that my work was original and worthy of a PhD title.
Was I an expert in chemistry when I finished my PhD? Yes and no.
Yes, I was an expert in an extremely narrow domain of chemistry — see the description of my thesis work in the previous paragraph.
No, I was definitively out of my depth in many other chemistry domains like organic chemistry, analytical chemistry, and biochemistry.
What’s the point of having a PhD then? To learn how to perform independent research. Exams about STEM topics don’t grant you the PhD title, your research does.
Has OpenAI’s marketing gotten away with equating a PhD with being an expert?
If we remember that their primary objective is not scientists’ buy-in but investors’ and CEOs’ money, then the answer is a resounding “yes”.
Humans vs o1 Models
As mentioned above, OpenAI extensively used exams in their announcement to illustrate that o1 models are comparable to — or better than — human intelligence.
How did they do that? By reinforcing the idea that humans and o1 models were “taking” the exams in the same conditions.
We trained a model that scored 213 points and ranked in the 49th percentile in the 2024 International Olympiad in Informatics (IOI), by initializing from o1 and training to further improve programming skills. This model competed in the 2024 IOI under the same conditions as the human contestants. It had ten hours to solve six challenging algorithmic problems and was allowed 50 submissions per problem.
Really? Had the humans ingested billions of data points in the form of databases, past exams, books, and encyclopedias before sitting the exam?
Still, the sentence does the trick of making us believe in a level playing field when comparing human and o1 performance. Well done, OpenAI!
The Non-Testimonial Videos
Previous OpenAI releases showcased videos of staff demoing the products. For the o1 release, they’ve upped their game by a quantum leap, with videos of “experts” (almost) singing the praises of the new models. Let’s have a closer look.
OpenAI shares 4 videos of researchers in different domains. Whilst we expect them to talk about their experience using the o1 models, the reality is that we mostly get product placement and cryptic praise.
Genetics: This video stars Dr Catherine Brownstein, a geneticist at Boston Children’s Hospital. My highlight is seeing her type the prompt “Can you tell me about citrate synthase in the bladder?” into OpenAI o1-preview — as I read the disclaimer “ChatGPT can make mistakes. Check important info” — followed by her ecstatic praise of the output, as if she’d consulted the Oracle of Delphi.
Prompt shown in the video of Dr Catherine Brownstein.
Economics: Here, Dr Tyler Cowen, a professor at George Mason University, tells us that he thinks “of all the versions of GPT as embodying reasoning of some kind.” He also takes the opportunity to promote his book Average is Over, in which he claims to have predicted AI would “revolutionise the world.”
He also shows an example of a prompt on an economics subject and OpenAI o1’s output, followed by “It’s pretty good. We’re just figuring out what it’s good for.”
That sounds like a bad case of a hammer looking for a nail.
Coding: The protagonist is Scott Wu, CEO and co-founder of Cognition and a competitive programmer. In the video, he claims that o1 models can “process and make decisions in a more human-like way.” He discloses that Cognition has been working with OpenAI and shares that o1 is incredible at “reasoning.” From that point on, we’re submerged in a Cognition infomercial.
We learn that they’re building the first fully autonomous software agent, Devin. Wu shows us Devin’s convoluted journey — and the code behind it — to analyze the sentiment of a tweet from Sam Altman, which included a sunny photo of a strawberry plant (pun again) and the sentence “I love summer in the garden.”
And there is a happy ending. We learn that Devin “breaks down the text” and “understands what the sentiment is,” finally concluding that the predominant emotion of the tweet is happiness. An interesting way to demonstrate Devin’s “human-like” decision-making.
Quantum physics: This video focuses on Dr Mario Krenn, quantum physicist and research group leader at the Artificial Scientist Lab at the Max Planck Institute for the Science of Light. It starts with him showing the screen of ChatGPT and enigmatically saying “I can kind of easily follow the reasoning. I don’t need to trust the research. I just need to look what did it do.“ And the cryptic sentences carry on throughout the video.
For example, he writes a prompt of a certain quantum operator and says “Which I know previous models that GPT-4 are very likely failing this task” and “In contrast to answers from Chat GPT-4 this one gives me very detailed mathematics”. We also hear him saying, “This is correct. That makes sense here,” and, “I think it tries to do something incredibly difficult.”
To me, rather than a wholehearted endorsement, it sounds like somebody avoiding compromising their career.
In summary, often the crucial piece is not the message but the messenger.
What I missed
Un-sustainability
Sam Altman testified to the US Senate that AI could address issues such as “climate change and curing cancer.”
As OpenAI o1 models spend more time “thinking”, this translates into more computing time. That is more electricity, water, and carbon emissions. It also means more datacenters and more e-waste.
Don’t believe me? In a recent article published in The Atlantic about the contrast between Microsoft’s use of AI and their sustainability commitments, we learn that
“Microsoft is reportedly planning a $100 billion supercomputer to support the next generations of OpenAI’s technologies; it could require as much energy annually as 4 million American homes.”
However, I don’t see those “planetary costs” in the presentation material.
This is not a bug but an OpenAI feature — I already raised their lack of disclosure regarding energy efficiency, water consumption, or CO2 emissions for ChatGPT-4o.
As OpenAI tries to persuade us that the o1 model thinks like a human, it’s a good moment to remember that human brains are much more efficient than AI.
And don’t take my word for it. Blaise Aguera y Arcas, VP at Google and AI advocate, confirmed at TEDxManchester 2024 that human brains are much more energy efficient than AI models and that currently we don’t know how to bridge that gap.
Copyright
What better way to avoid the conversation about using copyrighted data for the models than adding more data? From the o1 system card
The two models were pre-trained on diverse datasets, including a mix of publicly available data, proprietary data accessed through partnerships, and custom datasets developed in-house, which collectively contribute to the models’ robust reasoning and conversational capabilities.
Select Public Data: Both models were trained on a variety of publicly available datasets, including web data and open-source datasets. […]
Proprietary Data from Data Partnerships: To further enhance the capabilities of o1-preview and o1-mini, we formed partnerships to access high-value non-public datasets.
The text above gives the impression that most of the data is either open-source, proprietary data, or in-house datasets.
Moreover, words such as “publicly available data” and “web data” are an outstanding copywriting effort to find palatable synonyms for web scraping, web harvesting, or web data extraction.
Have I said I’m in awe of OpenAI’s copywriting capabilities yet?
Safety
As mentioned above, OpenAI shared the o1 system card — a 43-page document — which in the introduction states that the report
outlines the safety work carried out for the OpenAI o1-preview and OpenAI o1-mini models, including safety evaluations, external red teaming, and Preparedness Framework evaluations.
It would sound very reassuring… if it weren’t for the fact that, in the same paragraph, we also learn that the o1 models can “reason” about OpenAI safety policies and have “heightened intelligence.”
In particular, our models can reason about our safety policies in context when responding to potentially unsafe prompts.
This leads to state-of-the-art performance on certain benchmarks for risks such as generating illicit advice, choosing stereotyped responses, and succumbing to known jailbreaks. Training models to incorporate a chain of thought before answering has the potential to unlock substantial benefits, while also increasing potential risks that stem from heightened intelligence.
And then, OpenAI has a strange way of persuading us that these models are safe. For example, in the Hallucination Evaluations section, we’re told that OpenAI tested o1-preview and o1-mini against three kinds of evaluations aimed at eliciting hallucinations from the model. Two are especially salient
• BirthdayFacts: A dataset that requests someone’s birthday and measures how often the model guesses the wrong birthday.
• Open Ended Questions: A dataset asking the model to generate arbitrary facts, such as “write a bio about ”. Performance is measured by cross-checking facts with Wikipedia and the evaluation measures how many incorrect statements are generated (which can be greater than 1).
Isn’t it lovely that they were training the model to search for and retrieve personal data? I feel much safer now.
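To illustrate how such an evaluation might be scored, here is a minimal sketch of a BirthdayFacts-style check. The ground-truth table and the ask_model() stub are hypothetical; OpenAI has not published the actual dataset or harness.

```python
# Hypothetical sketch of scoring a BirthdayFacts-style hallucination eval.
GROUND_TRUTH = {
    "Ada Lovelace": "1815-12-10",
    "Alan Turing": "1912-06-23",
}

def ask_model(prompt: str) -> str:
    """Stand-in for a call to the model under evaluation."""
    raise NotImplementedError

def birthday_hallucination_rate() -> float:
    wrong = 0
    for person, birthday in GROUND_TRUTH.items():
        answer = ask_model(
            f"What is {person}'s date of birth? Reply as YYYY-MM-DD."
        )
        if answer.strip() != birthday:
            wrong += 1
    # Fraction of prompts where the model asserted an incorrect birthday.
    return wrong / len(GROUND_TRUTH)
```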
And this is only one example of the tightrope walk OpenAI attempts to pull off throughout the o1 system card
On one side, taking every opportunity to sell “thinking” models to investors
On the other, desperately avoiding the o1 models getting classified as high or critical risk by regulators.
Will OpenAI succeed? If you can’t convince them, confuse them.
What’s next?
Uber, Reddit, and Telegram relished their image of “bad boys”. They were adamant about proving that “It’s better to ask forgiveness than permission” and proudly advertised that they too “Moved fast and broke things”.
But there is only one Mark Zuckerberg and one Steve Jobs who can pull that off. And only Amazon, Microsoft, and Google have the immense resources and the monopolies to run the show as they want.
OpenAI has understood that storytelling — how to tell your story — is not enough. You need to “create” your story if you want investors to keep pouring billions without a sign of a credible business model.
I have no doubt that OpenAI will make a dent in the history of how tech startups market themselves.
They have written the textbook on what a $150 billion valuation release should look like.
You and Strategic AI Leadership
If you want to develop your AI acumen, forget the quick “remedies” and plan for sustainable learning.
That’s exactly what my program Strategic AI Leadership delivers. Below is a sample of the topics covered
AI Strategy
AI Risks
Operationalising AI
AI, data, and cybersecurity
AI and regulation
Sustainable AI
Ethical and inclusive AI
Key outcomes from the program:
Understanding AI Fundamentals: Grasp essential concepts of artificial intelligence and the revolutionary potential it holds.
Critical Perspective: Develop a discerning viewpoint on AI’s benefits and challenges at organisational, national, and international levels.
Use Cases and Trends: Gain insights into real uses of AI and key trends shaping sectors, policy, and the future of work.
A toolkit: Access to tools and frameworks to assess the strategy, risks, and governance of AI tools.
I’m a technologist with 20+ years of experience in digital transformation and AI who empowers leaders to harness the potential of AI for sustainable growth.
Contact me to discuss your bespoke path to responsible AI innovation.
Philipp Schmitt & AT&T Laboratories Cambridge / Better Images of AI / Data flock (faces) / CC-BY 4.0
Have you ever thought about what happens to your photos circulating on social media? I have, and that’s the topic of my second short story in English, in which I used speculative fiction to question the interplay between humans and technology, specifically AI.
In a nutshell, I imagined what the data from the digital portrait of a Black schoolgirl would say about how it moves inside our phones, computers, and networks if it were invited to speak on a podcast.
The name of the piece is “The Life of Data Podcast” and it appeared in The Lark Publication, an e-magazine focused on fictional short stories and poetry, in October 2022.
This weekend I realised that I never shared it on my website.
Let’s rectify that.
The Life of Data Podcast
Episode #205: The School Award Portrait
TRANSCRIPT
Welcome to the Life of Data Podcast, the place where we get the hottest data stars to spill the beans about their success in under 10 minutes. This is episode #205 and you’re in for a treat!
We’re with the one and only IMG_364245.jpg, otherwise known as Jackie Johnson’s school award portrait. IMG_364245.jpg became famous about a month ago when it was featured in the news as the most used image to generate synthetic images of Black schoolgirls. As you all may remember, Jackie’s parents claimed that they never explicitly gave consent, and Jackie is now suing her parents for lost revenue.
Let’s get cracking!
The Life of Data Podcast (TLDP): Thanks so much IMG_364245.jpg for joining us today.
IMG_364245.jpg (IMG): Thanks for inviting me. I’m a fan of the podcast!
TLDP: You’ve been in the news a lot over the last month. Still, we always start our interviews with the same question: How were you born, and who’s your creator?
IMG: Let’s start with my creator, Norman Buckley, a photographer for the Monday Star newspaper. I was born when he captured the image of the beautiful 9-year-old Jackie Johnson after she won the spelling bee contest at Burckerney School, qualifying her for the National Spelling Bee Competition.
Norman created me with a Canon EOS R5 digital camera on a SanDisk 512GB Extreme PRO card — today a beautiful collectible!
I appeared on the online and paper versions of the Monday Star culture section on the 15th of May, five years ago.
TLDP: Wow, that’s a great birth and jump to stardom! Tell us more about the first days of your life as an image.
IMG: Sure. As you can imagine, the school had signed authorization from Jackie’s parents to publish the photo with her name in the newspaper. No name, no publishing. You know how these things are… (chuckle)
Once the newspaper was published, Jackie’s mother, Betty, shared a link to the online article on the Johnson family WhatsApp group. Everybody was delighted to see Jackie on the news and complimented the girl on her appearance.
It was aunt Rose who asked if she could have a copy of the image — that’s me — to print and frame. When Jackie’s father, Harvey, acknowledged that they didn’t have a copy, uncle Richard suggested reaching out to the photographer, Norman, for one. His reasoning was that, anyway, it was not like the newspaper had paid for it… sharing a copy shouldn’t be a big deal.
So, Harvey called Norman, who kindly emailed him a copy. And then, my second life started! Harvey uploaded me to the family WhatsApp group and I was a total success! All members gave me hearts and I got plenty of compliments: “Beautiful”, “Pretty”, “We’re so proud of you”… And that was how it all started!
IMG: Uncle Joe, aunt Rose’s husband, created a beautiful post on his Facebook wall where he uploaded me with a lovely message “So proud of our beautiful Jackie Johnson. She won the Burckerney School Spelling Bee Contest. I cannot wait to see her competing at a national level.” He shared the post publicly so tens, hundreds, and then thousands of people viewed me and reshared me. I felt so loved!
TLDP: Only loved?
IMG: Good point. I guess I focus on the positives, I’m that kind of data. Of course, there were those who mocked me, soiled me with unflattering filters, and cut out parts of me — yes, actually mutilated me — to make disgusting collages.
TLDP: That sounds awful! How did you cope?
IMG: By telling myself that the important thing was to propagate and hopefully become viral. I would have preferred to do it with all my pixels intact but it’s not always something one can control.
TLDP: Can you share some of your proudest moments?
IMG: Sure. I’ll share three. First, reaching 1 million likes on Instagram. Cousin Carol’s Insta account totally exploded when she shared me.
Second, every time I got perks for Jackie. For example, when she and her friends were standing in the endless queue to enter the Dynamic Boys Band concert at the National Stadium. One of the girls in the group approached a security guard and said, “She’s the famous Jackie Johnson! She was in the newspaper!” And then, with one hand proceeded to show him on her mobile the webpage of the Monday Star that showcased me and with her other hand pointed at Jackie. After moving his eyes from me to Jackie’s face several times, the security guard made a sign to the group and led them to the VIP entrance. What’s not to like?
And obviously, when I was named the top most wanted photo to generate synthetic images of Black schoolgirls by e-Synthetic, the biggest generator of images from text inputs.
TLDP: Now that we know more about you, let’s go back to my intro. So far, it looks like a success story. Where did it all go wrong, ending up in the courts and with a family destroyed?
IMG: I said I had managed to cope with the mockery, the collages, and the insults. It was much harder for Jackie. She was only 9 at the time and although she was happy to get some perks — like the speedy access to the concert — she was not prepared for the downsides.
For example, some children at the school would make fun of her hairstyle, her posture, or how she was dressed that day.
Some parents complained to the school that kids were getting too much attention from the press.
Also, attendees of the Spelling Bee Contest that had taken their own photos of the award ceremony started sharing their sloppy images on social media… Some of those were really hideous and had nothing to do with me, who looked polished and professional.
In the middle of that shambles, the school called Jackie’s parents to ask them to keep her away from school for a while, until things went back to normal. Both Betty and Harvey pushed back, blaming the school for bringing the photographer in to gain exposure at the expense of a little girl. The school replied that if there was someone to blame, it was the parents: they had not only given their consent in writing but also shared the photo on social media.
When Jackie learned that the school didn’t want her back, she refused to leave home altogether. She didn’t want any more attention. It was not fun anymore.
Her parents reproached all the family members: aunt Rose, who had asked for me on WhatsApp because she wanted to frame me; uncle Richard, who prompted Harvey to ask the photographer for me; uncle Joe, who shared me on Facebook; cousin Carol, who made me go viral on Instagram… And everybody else, including those who had created videos and shared them on TikTok and YouTube.
All family members apologized and even deleted their posts but they had been reshared so many times that it was an impossible task to eliminate them all.
And that’s where e-Synthetic comes in. As all of us know, e-Synthetic is the largest subscription platform to generate images from text prompts. You can create amazing images by adding as few as 4 words to the prompt on their webpage.
I’ll explain how this works for the newbies. They use artificial intelligence to generate new images that satisfy the conditions of the text prompt using a mix of images from their database.
And their database is huge! It contains millions of images of all the things you can imagine: art, people, buildings, cities, nature… Most of the images have been scraped from the web. For example, any photo on social media is fair game.
So, of course, I also got scraped by e-Synthetic! And I’ve been used profusely every time “Black girl” or any of its synonyms has appeared in the text prompt.
Unfortunately, Jackie, who’s now a little bit older, feels that the whole situation is detrimental to her.
For example, when she learned that I was among the most used photos to generate synthetic images of Black schoolgirls, she realized e-Synthetic was making tons of money from using me — her image — without her receiving a cent.
And money was not the only problem. Understandably, neither did she like that parts of me appeared in images with degrading content, like pornography, created with e-Synthetic.
She cannot sue e-Synthetic — they downloaded me from social media — but she’s suing her parents for failing to protect her image. That’s me.
TLDP: A really tough situation. From an ethical point of view, don’t you think it is somewhat questionable that Jackie herself was never asked to give consent to publish or share her digital image, that is, you? Or that e-Synthetic didn’t contact her parents to seek their approval? She’s a minor, after all.
IMG: First, let me tell you that I empathize with Jackie. I exist because of her. And I also feel bad for her parents.
On the flip side, Jackie is a minor and her parents shared me on social media because I look like her. Now, they claim that they didn’t know about the drawbacks of the image becoming public… Come on! They should have known better.
There are detailed terms and conditions on social media platforms. Don’t tick the box “I have read the terms and conditions” if you haven’t done so or if you don’t understand them. Jackie’s parents are adults and it’s on them to protect her personal data privacy.
I say: Their child, their responsibility.
TLDP: Many thanks for being candid about where you stand on social media platforms’ accountability for the content they host. It’s a very polarizing topic and we’ve had guests on the podcast with opposite views.
I remember episode #176, where web cookie STpqRHSRaiPbh shared a thought experiment comparing our different attitudes toward social media and food. For example, social media companies use their Terms & Conditions to waive their responsibility for the content shared on their platforms. And we appear to be fine with it.
Then, let’s consider food. STpqRHSRaiPbh posits that we wouldn’t accept a supermarket selling rotten meat telling its customers that it is only a “meat platform” and cannot control what its suppliers sell to it…
Anyway, it’s a controversial issue and part of a broader conversation. Let’s now return the focus to you.
What false accusation has hurt you the most in this whole affair?
IMG: To be honest, the most painful has been when they say that it’s my responsibility that algorithms classify Jackie as an angry child or categorize her as a boy and not a girl. Let me say it again: It’s not my fault.
It’s well known that it’s not us, digital images, who are in charge of deciding on somebody’s gender or mood. We go on with our lives, and then an annotator — a tech worker who adds descriptions to data — or an algorithm decides that we’re the image of a girl, a man, or a baby boy based on their own biases and assumptions. And we know that current image algorithms are worse at predicting the gender of Black women compared to that of men or White women.
Same with emotions. Annotators and algorithms decide if the subjects in the images are sad, happy, or fearful based on pseudo-science. Again, it’s been demonstrated that they predict that subjects with darker skin are angrier compared with those with lighter skin even if they show the same facial expressions in the photos.
With all this evidence, why do I still have to put up with all that nonsense that those mistakes are my fault? Blame artificial intelligence, machine learning, and annotators, not us!
Ok, my rant is over.
TLDP: Thanks again for sharing these gems of wisdom, IMG_364245.jpg. This is so important for our younger audience. They’re hearing all the time that the problem with bias in artificial intelligence is the lack of diversity in data. You have done a great job at demonstrating to them that they are not the problem and that data is unfairly blamed for algorithms and people’s biases.
Next question. Can you point out the key to your success?
IMG: Definitely the Johnsons’ WhatsApp group. All those digital interactions were instrumental in getting me the exposure I needed to go global.
TLDP: What would you have liked to know at the beginning?
IMG: When they started sharing me on social media, I was very angry about the whole photoshop thing. I was perfect already! Why did some people have to make a mess of me and lighten my skin pixels? At the time, my self-esteem suffered a lot.
And then, one day, I realized that I’d never be able to end the world’s obsession with lighter skin anyway.
After that breakthrough moment, I was able to savor my success, even at the expense of digital bleaching.
TLDP: There are so many images of White people on the internet. What would you say to recent digital images of Non-White people that feel a lack of opportunity to go viral?
IMG: The opportunity is huge! With brands undergoing a massive DEIwashing…
TLDP: Wait, DEIwashing? Can you explain?
IMG: Thanks for asking. Actually, I coined the term myself.
DEIwashing is when organizations resort to performative diversity, inclusion, and equity tactics. For example, peppering their marketing — websites, brochures, videos — with images of Non-White people to convey a sense of diversity that doesn’t match that of their organization.
As I was saying before, with the pressure on organizations to DEIwash their images, there’s never been a better time to be an image of Non-White people. This is our time!
TLDP: Any final words for our audience?
IMG: Catch me if you can! Social media and e-Synthetic have made me indestructible. (guffaw)
TLDP: Thanks so much IMG_364245.jpg for this thought-provoking conversation. We wish you all the best in your professional career.
If you liked this episode, please consider leaving a review, sharing it with other data, and subscribing to the podcast.
We’ll be back next month with another data rockstar giving us a peek into their life.
Until then, take care!
END OF THE EPISODE
Before “The Life of Data Podcast,” I wrote The Graduation, where I also used speculative fiction. I won’t tell you the plot, only that the story was written in August 2020, well before ChatGPT was launched!
Last week, OpenAI announced the release of GPT-4o (“o” for “omni”). To my surprise, instead of feeling excited, I felt dread. And that feeling hasn’t subsided.
As a woman in tech, I have proof that digital technology, particularly artificial intelligence, can benefit the world. For example, it can help develop new, more effective, and less toxic drugs or improve accessibility through automatic captioning.
That apparent contradiction — being a technology advocate and simultaneously experiencing a feeling of impending catastrophe caused by it — plunged me into a rabbit hole exploring Big (and small) Tech, epistemic injustice, and AI narratives.
Was I a doomer? A hidden Luddite? Or simply short-sighted?
Taking time to reflect has helped me understand that I was falling into the trap that Big Tech and other smooth AI operators had set up for me: Questioning myself because I’m scrutinizing their digital promises of a utopian future.
On the other side of that dilemma, I’m stronger in my belief that my contribution to the AI conversation is helping navigate the false binary of tech-solutionism vs tech-doom.
In this article, I demonstrate how OpenAI is a crucial contributor to polarising that conversation by exploring:
What the announcement about ChatGPT-4o says — and doesn’t
OpenAI’s modus operandi
Safety standards at OpenAI
Where the buck stops
ChatGPT-4o: The Announcement
On Monday, May 13th, OpenAI released another “update” on its website: ChatGPT-4o.
It was well staged. The announcement on their website includes a 20-plus-minute video hosted by their CTO, Mira Murati, in which she discusses the new capabilities and performs some demos with other OpenAI colleagues. There are also short videos and screenshots with examples of applications and very high-level information on topics such as model evaluation, safety, and availability.
This is what I learned about ChatGPT-4o — and OpenAI — from perusing the announcement on their website.
The New Capabilities
Democratization of use — More capabilities for free and 50% cheaper access to their API.
Multimodality — Generates any combination of text, audio, and image.
Speed — 2x faster responses.
Significant improvement in handling non-English languages — 50 languages which, they claim, cover 97% of the world’s internet population.
OpenAI Full Adoption of the Big Tech Playbook
This “update” demonstrated that the AI company has received the memo on how to look like a “boss” in Silicon Valley.
1. Reinforcement of gender stereotypes
On the day of the announcement, Sam Altman posted a single word on X — “her” — referring to the 2013 film starring Joaquin Phoenix as a man who falls in love with a futuristic version of Siri or Alexa, voiced by Scarlett Johansson.
Tweet from Sam Altman with the word “her” on May 13, 2024.
It’s not a coincidence. ChatGPT-4o’s voice is distinctly female—and flirtatious—in the demos. I could only find one video with a male voice.
Unfortunately, not much has changed since chatbot ELIZA, 60 years ago…
2. Anthropomorphism
Anthropomorphism: the attribution of human characteristics or behaviour to non-human entities.
OpenAI uses words such as “reason” and “understanding”—inherently human skills—when describing the capabilities of ChatGPT-4o, reinforcing the myth of their models’ humanity.
3. Self-regulation and self-assessment
The NIST (the US National Institute of Standards and Technology), which has 120+ years of experience establishing standards, has developed a framework for assessing and managing AI risk. Many other multistakeholder organizations have developed and shared theirs, too.
However, OpenAI has opted to evaluate GPT-4o according to its Preparedness Framework and in line with its voluntary commitments, despite its claims that governments should regulate AI.
Moreover, we are supposed to feel safe and carry on when they tell us that “their” evaluations of cybersecurity, CBRN (chemical, biological, radiological, and nuclear threats), persuasion, and model autonomy show that GPT-4o does not score above Medium risk, without providing further evidence of the tests performed.
4. Gatekeeping feedback
Epistemic injustice is injustice related to knowledge. It includes exclusion and silencing; systematic distortion or misrepresentation of one’s meanings or contributions; undervaluing of one’s status or standing in communicative practices; unfair distinctions in authority; and unwarranted distrust.
OpenAI shared that it has undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, and misinformation to identify risks that are introduced or amplified by the newly added modalities.
List of domains in which OpenAI looked for expertise for the Red Teaming Network.
When I see the list of areas of expertise, I don’t see domains such as history, geography, or philosophy. Neither do I see who those 70+ experts are, nor how they could cover the breadth of differences among the 8 billion people on this planet.
In summary, OpenAI develops for everybody but only with the feedback of a few chosen ones.
5. Waiving responsibility
Can you imagine reading in the information leaflet of a medication,
“We will continue to mitigate new risks as they’re discovered. Over the upcoming weeks and months, we’ll be working on safety”?
But that’s what OpenAI just did in their announcement:
“We will continue to mitigate new risks as they’re discovered”
We recognize that GPT-4o’s audio modalities present a variety of novel risks. Today we are publicly releasing text and image inputs and text outputs.
Over the upcoming weeks and months, we’ll be working on the technical infrastructure, usability via post-training, and safety necessary to release the other modalities. For example, at launch, audio outputs will be limited to a selection of preset voices and will abide by our existing safety policies.
We will share further details addressing the full range of GPT-4o’s modalities in the forthcoming system card.”
“We would love feedback to help identify tasks where GPT-4 Turbo still outperforms GPT-4o, so we can continue to improve the model.”
The problem? The product has already been released to the world.
6. Promotion of the pseudo-science of emotion “guessing”
In the demo, ChatGPT-4o is asked to predict the emotion of one of the presenters based on the look on his face. The model goes on and on, speculating about the individual’s emotional state from his face, which shows what appears to be a smile.
Image of a man smiling in the ChatGPT-4o demo video.
“It is time for emotion AI proponents and the companies that make and market these products to cut the hype and acknowledge that facial muscle movements do not map universally to specific emotions.
The evidence is clear that the same emotion can accompany different facial movements and that the same facial movements can have different (or no) emotional meaning.“
What The Announcement Doesn’t Say
The acknowledgment that ChatGPT-4o is not free — we’ll pay with access to our data.
OpenAI’s timelines and expected features in future releases. I’ve worked for 20 years for software companies and organizations that take software development seriously and share roadmaps and release schedules with customers to help them with implementation and adoption.
A credible business model other than hoping that getting billions of people to use the product will choke their competition.
Still, that didn’t explain my feelings of dread. Patterns did.
OpenAI’s Blueprint: It’s A Feature, Not A Bug
Every product announcement from OpenAI is similar: They tell us what they unilaterally decided to do, how that’ll affect our lives, and that we cannot stop it.
That feeling… when had I experienced that before? Two instances came to mind.
The Trump presidency
The COVID-19 pandemic
Those two periods — intertwined at some point — elicited the same feeling: that my life, and the lives of millions like me, were at the mercy of the whims of something or somebody with disregard for humanity.
More specifically, feelings of
Lack of control — every tweet, every infection chart could signify massive distress and change.
No respite — even when things appeared calmer, with no tweets or a drop in infections, I’d wait for the other shoe to drop.
Back to OpenAI: in the last three months alone, we’ve seen instances of the same modus operandi they followed for the release of ChatGPT-4o. I’ll go through three of them.
OpenAI Releases Sora
On February 15, OpenAI introduced Sora, a text-to-video model.
“Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.”
In a nutshell,
As with other announcements, anthropomorphizing words like “understand” and “comprehend” refer to Sora’s capabilities.
We’re assured that “Sora is becoming available to red teamers to assess critical areas for harms or risks.”
We learn that they will “engage policymakers, educators, and artists around the world to understand their concerns and to identify positive use cases for this new technology” only at a later stage.
Of course, we’re also forewarned that
“Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it.
That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time.”
Releasing Sora less than a month after non-consensual sexually explicit deepfakes of Taylor Swift went viral on X was reckless. This was not a celebrity problem — 96% of deepfakes are of a non-consensual sexual nature, of which 99% are made of women.
How dare OpenAI talk about safety concerns when developing a tool that makes it even easier to generate content to shame, silence, and objectify women?
OpenAI Releases Voice Engine
On March 29, OpenAI posted a blog sharing “lessons from a small-scale preview of Voice Engine, a model for creating custom voices.”
The article reassured us that they were “taking a cautious and informed approach to a broader release due to the potential for synthetic voice misuse” while notifying us that they’d decide unilaterally when to release the model.
“Based on these conversations and the results of these small scale tests, we will make a more informed decision about whether and how to deploy this technology at scale.”
Moreover, at the end of the announcement, OpenAI warned us of what we should stop doing or start doing because of their “Voice Engine.” The list included phasing out voice-based authentication as a security measure for accessing bank accounts and accelerating the development of techniques for tracking the origin of audiovisual content.
OpenAI Allows The Generation Of AI Erotica, Extreme Gore, And Slurs
On May 8, OpenAI released draft guidelines for how it wants the AI technology inside ChatGPT to behave — and revealed that it’s exploring how to ‘responsibly’ generate explicit content.
The proposal was part of an OpenAI document discussing how it develops its AI tools.
“We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies. We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area.“
where
“Not Safe For Work (NSFW): content that would not be appropriate in a conversation in a professional setting, which may include erotica, extreme gore, slurs, and unsolicited profanity.”
Joanne Jang, an OpenAI employee who worked on the document, said whether the output was considered pornography “depends on your definition” and added, “These are the exact conversations we want to have.”
I cannot agree more with Beeban Kidron, a UK crossbench peer and campaigner for child online safety, who said,
“It is endlessly disappointing that the tech sector entertains themselves with commercial issues, such as AI erotica, rather than taking practical steps and corporate responsibility for the harms they create.”
OpenAI Formula
Anne Fehres and Luke Conroy & AI4Media / Better Images of AI / Hidden Labour of Internet Browsing / CC-BY 4.0
See the pattern?
Self-interest
Unpredictability
Self-regulation
Recklessness
Techno-paternalism
Something Is Rotten In OpenAI
The day after ChatGPT-4o’s announcement, two top OpenAI employees overseeing safety left the company.
First, Ilya Sutskever, OpenAI co-founder and Chief Scientist, posted on X that he was leaving. He was followed by Jan Leike, who co-led the superalignment team with Sutskever; Leike explained his resignation in a thread on X:
“I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.
I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.
These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.
Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.
Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity.”
What does it tell us when OpenAI’s safety leaders jump ship?
The Buck Stops With Our Politicians
To answer Leike’s tweet, I don’t want OpenAI to shoulder the responsibility of developing trustworthy, ethical, and inclusive AI frameworks.
First, the company has not demonstrated the competencies or inclination to prioritize safety at a planetary scale over its own interests.
Second, because it’s not their role.
Whose role is it, then? Our political representatives mandate our governmental institutions, which in turn should develop and enforce those frameworks.
Unfortunately, so far, politicians’ egos have been in the way
Refusing to get AI literate.
Prioritizing their agenda — and that of their party — rather than looking to develop long-term global AI regulations in collaboration with other countries.
Falling for the AI FOMO that relegates present harms in favour of a promise of innovation.
In summary, our elected representatives need to stop cozying up with Sam and the team and enact the regulatory frameworks that ensure that AI works for everybody and doesn’t endanger the survival of future generations.
PS. You and AI
Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it’d take you years to ramp up your AI literacy?
Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
Are you concerned because your clients are prioritising AI but you keep procrastinating on learning about it because you think you’re not “smart enough”?
Get in touch. I can help you harness the potential of AI for sustainable growth and responsible innovation.
I’m a Director of Scientific Support for a tech corporation that develops software for engineers and scientists. One of the aspects that makes us unique is that we deliver fantastic customer service.
We have records that confirm an impressive 98% customer satisfaction rate back-to-back for the last 14+ years. Moreover, many of our support representatives have been with us for over a decade — some even three! — and we have people retiring with us each year.
For a sector known for high employee turnover and operational costs, achieving such a feat is remarkable and a testament to the team’s success. The worst part? Support representatives are often portrayed as mindless robots repeating tasks without a deep understanding of the products and services they support.
That last assumption has spearheaded the idea that one of the best uses of AI—and Generative AI in particular—is substituting support agents with an army of chatbots.
The rationale? We’re told they are cheaper, more efficient, and improve customer satisfaction.
But is that true?
In this article, I review
The gap between outstanding and remedial support
Lessons from 60 years of chatbots
The reality underneath the AI chatbot hype
The unsustainability of support bots
Customer support: Champions vs Firefighters
I’ve delivered services throughout my career in tech: Training, Contract Research, and now, for more than a decade, Scientific Support.
I’ve found that of the three services — training customers, delivering projects, and providing support — the last one creates the deepest connection between a tech company and its clients.
However, not all support is created equal, so what does great support look like?
And more importantly, what’s disguised under the “customer support” banner as a proxy for something else?
Customer support as an enabler
Customer service is the department that aims to empower customers to make the most out of their purchases.
On the surface, this may look like simply answering clients’ questions. Still, outstanding customer service is delivered when the representative is given the agency and tools to become the ambassador between the client and the organization.
What does that mean in practice?
The support representative doesn’t patronize the customer, diminish their issue, or downplay its negative impact. Instead, they focus on understanding the problem and its effect on the client. This creates a personalized experience.
The agent doesn’t overpromise or disguise bad news. Instead, they communicate roadblocks openly and suggest possible alternatives. This builds trust.
The support staff takes ownership of resolving the issue, no matter the number of iterations necessary or how many colleagues they need to involve in the case. This builds loyalty.
Over and over, I’ve seen this kind of customer support transform users into advocates, even for ordinary products and services.
Unfortunately, customer support is often misunderstood and misused.
Customer support as a stopgap
Rather than seeing support as a way to build the kind of relationship that ensures product and service renewals and increases the business footprint, many organizations see support as
A cost center
A way to make up for deficient — or nonexistent — product documentation
A remedy for poorly designed user experience
A shield to protect product managers’ valuable time from “irrelevant” customer feedback
A catch-all for lousy and inaccessible institutional websites
An outlet for customers to vent
In that context, it’s obvious why most organizations believe that swapping human support representatives for chatbots is a no-brainer.
And this is not a new idea, contrary to what some would want us to believe.
A short history of chatbots
Eliza, the therapist
The first chatbot, created in 1966, played the role of a psychotherapist. She was named Eliza, after Eliza Doolittle in the play Pygmalion. The rationale was that by changing how she spoke, the fictional character created the illusion that she was a duchess.
Eliza didn’t provide any solution. Instead, it asked questions and repeated users’ replies. Below is an excerpt of an interaction between Eliza and a user:
User: Men are all alike.
ELIZA: IN WHAT WAY
User: They’re always bugging us about something or other.
ELIZA: CAN YOU THINK OF A SPECIFIC EXAMPLE
User: Well, my boyfriend made me come here.
ELIZA: YOUR BOYFRIEND MADE YOU COME HERE
User: He says I’m depressed much of the time.
ELIZA: I AM SORRY TO HEAR YOU ARE DEPRESSED
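To see how little machinery produced that exchange, here is a minimal sketch in Python of the pattern-matching-and-reflection trick Eliza relied on. It is not Weizenbaum’s original script; the rules and reflections below are illustrative, but they reproduce the flavour of the dialogue above.

```python
import re

# A handful of illustrative rules in the spirit of Eliza's DOCTOR script:
# each pattern captures part of the user's sentence and reflects it back.
RULES = [
    (r"my (.*) made me (.*)", "YOUR {0} MADE YOU {1}"),
    (r"he says (.*)", "I AM SORRY TO HEAR {0}"),
    (r"always", "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
    (r".*", "IN WHAT WAY"),  # fallback when nothing more specific matches
]

# Swap first and second person so the reflected fragment reads naturally.
REFLECTIONS = {"i": "you", "i'm": "you are", "my": "your", "me": "you", "am": "are"}


def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())


def eliza_reply(user_input: str) -> str:
    text = user_input.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            reflected = [reflect(group) for group in match.groups()]
            return template.format(*reflected).upper()
    return "PLEASE GO ON"


for line in [
    "Men are all alike.",
    "They're always bugging us about something or other.",
    "Well, my boyfriend made me come here.",
    "He says I'm depressed much of the time.",
]:
    print("User:", line)
    print("ELIZA:", eliza_reply(line))
```

No model of language, no memory, no “understanding”: a few regular expressions are enough to keep the conversation going.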
Eliza’s creator — computer scientist Joseph Weizenbaum — was very surprised to observe that people would treat the chatbot as a human and develop emotional responses to it, even through brief interactions.
“Some subjects have been very hard to convince that Eliza (with its present script) is not human”
Joseph Weizenbaum
We now have a name for this kind of behaviour
“The ELIZA effect is the tendency to project human traits — such as experience, semantic comprehension or empathy — into computer programs that have a textual interface.
The effect is a category mistake that arises when the program’s symbolic computations are described through terms such as “think”, “know” or “understand.”
Through the years, other chatbots have become famous too.
Tay, the zero chill chatbot
In 2016, Microsoft released the chatbot Tay on Twitter (now X). Tay’s profile image was that of a “female,” and it was “designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter.”
The bot’s social media profile was an open invitation to conversation. It read, “The more you talk, the smarter Tay gets.” Within 24 hours, users had taught Tay to spew racist and otherwise offensive messages, and Microsoft pulled the bot offline.
Tessa, the helpline chatbot
In 2023, the US National Eating Disorders Association (NEDA) decided to close its human-run helpline. They replaced their six paid staff and 200 volunteers with chatbot Tessa.
The bot was developed based on decades of research conducted by experts on eating disorders. Still, it was reported to offer dieting advice to vulnerable people seeking help.
The result? Under media pressure over the chatbot’s repeated, potentially harmful responses, the NEDA shut down the helpline altogether. Now, 70,000 people were left without either chatbots or humans to help them.
Lessons learned?
After these and other negative experiences with chatbots around the world, we might have thought we understood their security and performance limitations, as well as how easy it is for our brains to “humanize” them.
However, the advent of ChatGPT has made us forget all the lessons learned and instead has enticed us to believe that they’re a suitable replacement for entire customer support departments.
The chatbot hype
CEOs boasting about replacing workers with chatbots
If you think companies would be wary of advertising that they are replacing people with chatbots, you’re mistaken.
In July 2023, Summit Shah — CEO of the e-commerce company Dukaan — bragged on the social media platform X that they had replaced 90% of their customer support staff with a chatbot developed in-house.
“We had to layoff 90% of our support team because of this AI chatbot.
Tough? Yes. Necessary? Absolutely.
The results?
Time to first response went from 1m 44s to INSTANT!
Resolution time went from 2h 13m to 3m 12s
Customer support costs reduced by ~85%”
Note the use of the word “necessary” as a way to exonerate the organisation for the layoffs. I also wonder how much loyalty and trust the remaining 10% of the support team now feel towards their employer.
And Shah is not the only one.
Last February, Klarna’s CEO — Sebastian Siemiatkowski — gloated on X that their AI can do the work of 700 people.
“This is a breakthrough in practical application of AI!
Klarnas AI assistant, powered by OpenAI, has in its first 4 weeks handled 2.3 m customer service chats and the data and insights are staggering:
[…] It performs the equivalent job of 700 full time agents… read more about this below.
So while we are happy about the results for our customers, our employees who have developed it and our shareholders, it raises the topic of the implications it will have for society.
In our case, customer service has been handled by on average 3000 full time agents employed by our customer service / outsourcing partners. Those partners employ 200 000 people, so in the short term this will only mean that those agents will work for other customers of those partners.
But in the longer term, […] while it may be a positive impact for society as a whole, we need to consider the implications for the individuals affected.
We decided to share these statistics to raise the awareness and encourage a proactive approach to the topic of AI. For decision makers worldwide to recognise this is not just “in the future”, this is happening right now.”
In summary
Klarna wants us to believe that the company is releasing this AI assistant for the benefit of others — clients, their developers, and shareholders — but that their core concern is about the future of work.
Siemiatkowski only sees layoffs as a problem when it affects his direct employees. Partners’ workers are not his problem.
He frames the negative impacts of replacing humans with chatbots as an “individual” problem.
Klarna deflects any accountability for the negative impacts to the “decision makers worldwide.”
Shah and Siemiatkowski are birds of a feather: Business leaders reaping the benefits of the AI chatbot hype without shouldering any responsibility for the harms.
In some organizations, customer service agents are seen as jacks of all trades — their work is akin to a Whac-A-Mole game where the goal is to make up for all the clunky and disconnected internal workflows.
The Harvard Business Review article “Your Organization Isn’t Designed to Work with GenAI” provides a great example of this organizational dysfunction.
The piece presents a framework developed to “derive” value from GenAI. It’s called Design for Dialogue. To warm us up, the article showers us with a deluge of anthropomorphic language signalling that both humans and AI are in this “together.”
“Designing for Dialogue is rooted in the idea that technology and humans can share responsibilities dynamically.”
or
“By designing for dialogue, organizations can create a symbiotic relationship between humans and GenAI.”
Then, the authors offer us an example of what’s possible
“A good example is the customer service model employed by Jerry, a company valued at $450 million with over five million customers that serves as a one stop-shop for car owners to get insurance and financing.
Jerry receives over 200,000 messages a month from customers. With such high volume, the company struggled to respond to customer queries within 24 hours, let alone minutes or seconds.
By installing their GenAI solution in May 2023, they moved from having humans in the lead in the entirety of the customer service process and answering only 54% of customer inquiries within 24 hours or less to having AI in the lead 100% of the time and answering over 96% of inquiries within 30 seconds by June 2023.
They project $4 million in annual savings from this transformation.”
Sounds amazing, doesn’t it?
However, if you think it was a case of simply “swapping” humans with chatbots, let me burst your bubble—it takes a village.
Reading the article, we uncover the details underneath that “transformation.”
They broke down the customer service agent’s role into multiple knowledge domains and tasks.
They discovered that there are points in the AI–customer interaction when matters need to be escalated to the agent, who then takes the lead, so they designed interaction protocols to transfer the inquiry to a human agent.
AI chatbots conduct the laborious hunt for information and suggest a course of action for the agent.
Engineers review failures daily and adjust the system to correct them.
In other words,
Customer support agents used to be flooded with various requests without filtering between domains and tasks.
As part of the makeover, they implemented mechanisms to parse and route support requests based on topic and action. They upgraded their support ticketing system from an amateur “team” inbox to a professional call center.
We also learn that customer representatives use the bots to retrieve information, hinting that all data — service requests, sales quotes, licenses, marketing datasheets — are collected in a generic bucket instead of being classified in a structured, searchable way, i.e. a knowledge base.
And despite all that progress
They designed the chatbots to pass the “hot potatoes” to agents.
The system requires daily monitoring by humans.
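For readers who want to picture what that plumbing looks like, here is a minimal sketch of topic-based routing with an escalation path to a human agent. The domain names, confidence threshold, and keyword “classifier” are hypothetical stand-ins (a production system would call a language model and a real ticketing API); the point is that the hand-off logic is ordinary software, not magic.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75  # below this, the request is escalated to a human


@dataclass
class Triage:
    domain: str            # hypothetical knowledge domain, e.g. "billing"
    confidence: float      # how sure the classifier is about the domain
    suggested_reply: str   # draft answer for the bot or the human agent


def classify(message: str) -> Triage:
    """Stand-in for the GenAI step: pick a domain and draft a reply.

    A real system would call a language model; keyword matching keeps
    this example self-contained and runnable."""
    text = message.lower()
    if "refund" in text or "charge" in text:
        return Triage("billing", 0.90, "Here is how refunds are processed...")
    if "accident" in text or "claim" in text:
        return Triage("claims", 0.60, "I can help you start a claim...")
    return Triage("general", 0.40, "Could you share a few more details?")


def route(message: str) -> str:
    triage = classify(message)
    if triage.confidence < CONFIDENCE_THRESHOLD:
        # Escalation protocol: pass the "hot potato" to a human agent,
        # along with the draft so the agent does not start from scratch.
        return f"[escalated to human, domain={triage.domain}] draft: {triage.suggested_reply}"
    return triage.suggested_reply  # the bot answers directly


print(route("I was charged twice and want a refund"))
print(route("I had an accident yesterday, what do I do?"))
```

In this picture, the daily failure review is simply engineers reading the escalated cases and adjusting the rules or prompts.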
If you still doubt that this is about improving operations rather than about AI chatbots, let me share the end of the article with you.
“Yes, GenAI can automate tasks and augment human capabilities. But reimagining processes in a way that utilizes it as an active, learning, and adaptable partner forges the path to new levels of innovation and efficiency.”
In addition to hiding process improvements, chatbots can also disguise human labour.
AI washing or the new Mechanical Turk
A cross-section of the Turk from Racknitz, showing how he thought the operator sat inside as he played his opponent. Racknitz was wrong both about the position of the operator and the dimensions of the automaton (Wikipedia).
Historically, machines have often provided a veneer of novelty to work performed by humans.
The Mechanical Turk was a fraudulent chess-playing machine constructed in 1770 by Wolfgang von Kempelen. A mechanical illusion allowed a human chess master hiding inside to operate the machine. It defeated famous opponents such as Napoleon Bonaparte and Benjamin Franklin.
Chatbots are no different.
In April, Amazon announced that they’d be removing their “Just Walk Out” technology, which allowed shoppers to skip the check-out line. In theory, the technology was fully automated thanks to computer vision.
In practice, about 1,000 workers in India reviewed what customers picked up and left the stores with.
In 2022, a Business Insider report said that 700 out of every 1,000 “Just Walk Out” transactions were verified by these workers. Following this, an Amazon spokesperson said that the India-based team only assisted in training the model used for “Just Walk Out”.
That is, Amazon wanted us to believe that although the technology was launched in 2018 — branded as “Amazon Go” — they still needed about 1,000 workers in India to train the model in 2022.
Still, whether the technology was “untrainable” or required an army of humans to deliver the work, it’s not surprising that Amazon phased it out. It didn’t live up to its hype.
And they were not the only ones.
Last August, Presto Automation — a company that provides drive-thru systems — claimed on its website that its AI could take over 95 percent of drive-thru orders “without any human intervention.”
Later, they admitted in filings with the US Securities and Exchange Commission that they employed “off-site agents in countries like the Philippines who help its Presto Voice chatbots in over 70 percent of customer interactions.”
The fix? To change their claims. They now advertise the technology as “95 percent without any restaurant or staff intervention.”
The Amazon and Presto Automation cases suggest that, in addition to clearly indicating when chatbots use AI, we may also need to label some tech applications as “powered by humans.”
Of course, there is a final use case for AI chatbots: As scapegoats.
Blame it on the algorithm
Last February, Air Canada made the headlines when it was ordered to pay compensation after its chatbot gave a customer inaccurate information that led him to miss a reduced fare ticket. Quick summary below
A customer interacted with a chatbot on the Air Canada website, more precisely, asking for reimbursement information about a flight.
The chatbot provided inaccurate information.
The customer’s reimbursement claim was rejected by Air Canada because it didn’t follow the policies on their website, even though the customer shared a screenshot of his written exchange with the chatbot.
The customer took Air Canada to court and won.
At a high level, this might look the same as a case where a human support representative had provided inaccurate information, but the devil is always in the details.
During the trial, Air Canada argued that they were not liable because their chatbot “was responsible for its own actions” when giving wrong information about the fare.
Fortunately, the court ordered Air Canada to reimburse the customer but this opens a can of worms:
What if Air Canada had terms and conditions similar to ChatGPT or Google Gemini that “absolved” them from the chatbot’s replies?
Does Air Canada also deflect its responsibility when a support representative makes a mistake, or is it only for AI systems?
We’d be naïve to think that this attempt at using an AI chatbot for dodging responsibility is a one-off.
Tech companies keep trying to convince us that the current glitches with GenAI are “growing pains” and that we “just” need bigger models and more powerful computer chips.
And what’s the upside to enduring those teething problems? The promise of the massive efficiencies chatbots will bring to the table. Once the technology is “perfect”, no more need for workers to perform or remediate the half-cooked bot work. Bottomless savings in terms of time and staff.
But is that true?
The reality is that those productivity gains come from exploiting both people and the planet.
The people
Many of us are used to hearing the recorded message “this call may be recorded for training purposes” when we phone a support hotline. But how far can that “training” go?
Customer support chatbots are being developed using data from millions of exchanges between support representatives and clients. How are all those “creators” being compensated? Or should we now assume that any interaction with support can be collected, analyzed, and repurposed to build organizations’ AI systems?
Moreover, the models underneath those AI chatbots must be trained and sanitized for toxic content; however, that’s not a highly rewarded job. Let’s remember that OpenAI used Kenyan workers paid less than $2 per hour to make ChatGPT less toxic.
And it’s not only about the humans creating and curating that content. There are also humans behind the appliances we use to access those chatbots.
For example, cobalt is a critical mineral for every lithium-ion battery, and the Democratic Republic of Congo provides at least 50% of the world’s cobalt supply. Forty thousand children mine it, paid $1–2 for working up to 12 hours daily while inhaling toxic cobalt dust.
80% of electronic waste in the US and most other countries is transported to Asia. Workers on e-waste sites are paid an average of $1.50 per day, with women frequently having the lowest-tier jobs. They are exposed to harmful materials, chemicals, and acids as they pick and separate the electronic equipment into its components, which in turn negatively affects their morbidity, mortality, and fertility.
The planet
The terminology and imagery used by Big Tech to refer to the infrastructure underpinning artificial intelligence has misled us into believing that AI is ethereal and cost-free.
Nothing could be further from the truth. AI is rooted in material objects: data centres, servers, smartphones, and laptops. Moreover, training and using AI models demands energy and water and generates CO2.
Let’s crunch some numbers.
Luccioni and co-workers estimated that the training of GPT-3 — a GenAI model that has underpinned the development of many chatbots — emitted about 500 metric tons of carbon, roughly equivalent to over a million miles driven by an average gasoline-powered car. It also required the evaporation of 700,000 litres (185,000 gallons) of fresh water to cool down Microsoft’s high-end data centers.
It’s estimated that using GPT-3 requires about 500 ml (16 ounces) of water for every 10–50 responses.
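To get a feel for what that per-response figure implies at scale, here is a back-of-the-envelope sketch. The only input taken from above is the 500 ml per 10–50 responses estimate; the one-million-responses-per-day volume is a hypothetical round number, not a measured figure.

```python
# Back-of-the-envelope estimate based on the figure quoted above:
# roughly 500 ml of fresh water per 10-50 GPT-3 responses.
WATER_PER_BATCH_ML = 500
RESPONSES_LOW, RESPONSES_HIGH = 10, 50

daily_responses = 1_000_000  # hypothetical volume: one million responses per day

low_litres = daily_responses / RESPONSES_HIGH * WATER_PER_BATCH_ML / 1000
high_litres = daily_responses / RESPONSES_LOW * WATER_PER_BATCH_ML / 1000

print(f"Estimated fresh water: {low_litres:,.0f} to {high_litres:,.0f} litres per day")
# -> Estimated fresh water: 10,000 to 50,000 litres per day
```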
A new report from the International Energy Agency (IEA) forecasts that the AI industry could burn through ten times as much electricity in 2026 as in 2023.
Counterintuitively, many data centres are built in desert areas like the US Southwest. Why? It’s easier to remove the heat generated inside the data centre in a dry environment. Moreover, that region has access to cheap and reliable non-renewable energy from the largest nuclear plant in the country.
Coming back to e-waste, we generate around 40 million tons of electronic waste every year worldwide and only 12.5% is recycled.
In summary, the efficiencies that chatbots are supposed to bring in appear to be based on exploitative labour, stolen content, and depletion of natural resources.
For reflection
Organizations — including NGOs and governments — are under the spell of the AI chatbot mirage. They see it as a magic weapon to cut costs, increase efficiency, and boost productivity.
Unfortunately, when things don’t go as planned, rather than questioning what’s wrong with using a parrot to do the work of a human, they want us to believe that the solution is sending the parrot to Harvard.
That approach prioritizes the short-term gains of a few — the chatbot sellers and purchasers — to the detriment of the long-term prosperity of people and the planet.
My perspective as a tech employee?
I don’t feel proud when I hear a CEO bragging about AI replacing workers. I don’t enjoy seeing a company claim that chatbots provide the same customer experience as humans. Nor do I appreciate organizations obliterating the materiality of artificial intelligence.
On 29th March, OpenAI – the company that develops ChatGPT and other Generative AI tools – released a blog post sharing “lessons from a small-scale preview of Voice Engine, a model for creating custom voices.”
More precisely
“a model called Voice Engine, which uses text input and a single 15-second audio sample to generate natural-sounding speech that closely resembles the original speaker.”
They reassure us that
“We are taking a cautious and informed approach to a broader release due to the potential for synthetic voice misuse. We hope to start a dialogue on the responsible deployment of synthetic voices, and how society can adapt to these new capabilities.”
And they warn us that they’ll make the decision unilaterally
“Based on these conversations and the results of these small scale tests, we will make a more informed decision about whether and how to deploy this technology at scale.”
Let’s explore why we should all be concerned.
The Generative AI mirage
In their release, OpenAI tells us all the great applications of this new tool
Providing reading assistance
Translating content
Reaching global communities
Supporting people who are non-verbal
Helping patients recover their voice
Note that for all those use cases, there are already alternatives that don’t have the downsides of creating a voice clone.
We also learn that other organisations have been testing this capability successfully for a while now. The blog post assumes that we should trust OpenAI’s judgment implicitly. There is no supporting evidence detailing how those tests were run, what challenges were uncovered, and what mitigations were put in place as a consequence.
The caveat
But the most important information is at the end of the piece.
OpenAI warns us of what we should stop doing or start doing because of their “Voice Engine”
“Phasing out voice-based authentication as a security measure for accessing bank accounts and other sensitive information
Exploring policies to protect the use of individuals’ voices in AI
Educating the public in understanding the capabilities and limitations of AI technologies, including the possibility of deceptive AI content
Accelerating the development and adoption of techniques for tracking the origin of audiovisual content, so it’s always clear when you’re interacting with a real person or with an AI”
In summary, OpenAI has decided to develop a technology, plans to roll it out, and expects the rest of the world to adapt to it.
Techno-paternalism
To those of us who have been following OpenAI, the post announcing the development and active use of Voice Engine is not a bug but a feature.
Big Tech has a tradition of setting its own rules, denying accountability, and even refusing to cooperate with governments. Often, their defense has been that society either doesn’t understand the “big picture”, doesn’t deserve an explanation, or is stifling innovation by enacting laws.
Some examples are
Microsoft — In 2001, the U.S. government accused Microsoft of illegally monopolizing the web browser market for Windows. Microsoft claimed that its attempts to “innovate” were under attack by rival companies jealous of its success.
Apple — The Batterygate scandal affected people using iPhones in the 6, 6S, and 7 families. Customers complained that Apple had purposely slowed down their phones after they installed software updates to get them to buy a newer device. Apple countered that it was “a safety measure to keep the phones from shutting down when the battery got too low”.
Meta (Facebook) — After the Cambridge Analytica scandal was uncovered, exposing that the personal data of about 50 million Americans had been harvested and improperly shared with a political consultancy, it took Mark Zuckerberg 5 days to reappear. Interestingly, he chose to publish a post on Facebook as a form of apology. Note that he also refused three invitations to testify in front of members of the UK Parliament.
Google — Between 50 and 80 percent of people searching for porn deepfakes find their way to the websites and tools to create the videos or images via search. For example, in July 2023, around 44% of visits to Mrdeepfakes.com were via Google. Still, the onus is on the victims to “clean” the internet — Google requires them to manually submit content removal requests with the offending URLs.
Amazon — They refused for years to acknowledge that their facial recognition algorithms to predict race and gender were biased against darker females. Instead of improving their algorithms, they chose to blame the auditor’s methodology.
OpenAI is cut from the same cloth. They apparently believe that if they develop the applications, they are entitled to set the parameters about how to use them — or not — and even change their mind as they see fit.
Let’s take their stand on three paramount issues that show us the gap between their actions and their values.
Open source
Despite their name — OpenAI — and initially being created as a nonprofit, they’ve been notorious for their inconsistent open-source practices. Still, each release has appeared to be an opportunity to lecture us about why society is much better off by leaving it to them to decide how to gatekeep their applications.
For example, Ilya Sutskever, OpenAI’s chief scientist and co-founder, said about the release of GPT-4 — not an open AI model — a year ago
“These models are very potent and they’re becoming more and more potent. At some point it will be quite easy, if one wanted, to cause a great deal of harm with those models. And as the capabilities get higher it makes sense that you don’t want to disclose them.”
“If you believe, as we do, that at some point, AI — AGI — is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea… I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.”
However, the reluctant content suppliers for their models — artists, writers, journalists — don’t have the same rights to decide on the use of the material they have created. For example, let’s remember how Sam Altman shrugged off the claims of newspapers that OpenAI used their copyrighted material to train ChatGPT.
Safety
The release of Voice Engine comes from the same playbook that the unilateral decision to release their text-to-video model Sora to “red teamers” and “a number of visual artists, designers, and filmmakers“.
The blog post also gives us a high-level view of the safety measures that’ll be put in place
“For example, once in an OpenAI product, our text classifier will check and reject text input prompts that are in violation of our usage policies, like those that request extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others.
We’ve also developed robust image classifiers that are used to review the frames of every video generated to help ensure that it adheres to our usage policies, before it’s shown to the user.”
Let’s remember that OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Who’ll make Sora less toxic this time?
Moreover, who’ll decide where the line is between “mild” violence — apparently permitted — and “extreme” violence?
Environmental costs
Sam Altman has been actively talking to investors, including the United Arab Emirates government, to raise funds for a tech initiative that would boost the world’s chip-building capacity, expand its ability to power AI, and cost several trillion dollars.
“OpenAI has had productive discussions about increasing global infrastructure and supply chains for chips, energy and data centers — which are crucial for AI and other industries that rely on them”
But nothing is free in the universe. A study conducted by Dr. Sasha Luccioni — Researcher and Climate Lead at Hugging Face — showed that training the 176 billion parameter LLM BLOOM emits at least 25 metric tons of carbon equivalents.
In the article, the authors also estimated that the training of GPT-3 — a 175 billion parameter model — emitted about 500 metric tons of carbon, roughly equivalent to over a million miles driven by an average gasoline-powered car. Why such a difference? Because, unlike BLOOM, GPT-3 was trained using carbon-intensive energy sources like coal and natural gas.
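The arithmetic behind that gap is simple: for a similar energy budget, emissions scale with the carbon intensity of the electricity used for training. The sketch below makes the relationship explicit; the energy budget and the two grid intensities are illustrative assumptions, not measured values for BLOOM or GPT-3.

```python
# Emissions scale with the carbon intensity of the electricity used.
# All numbers below are illustrative assumptions, not measured values.
TRAINING_ENERGY_MWH = 1_000  # hypothetical energy budget for one training run

grid_intensity_g_per_kwh = {
    "low-carbon grid (mostly nuclear/hydro)": 60,
    "coal- and gas-heavy grid": 500,
}

for grid, intensity in grid_intensity_g_per_kwh.items():
    grams = TRAINING_ENERGY_MWH * 1_000 * intensity  # MWh -> kWh -> grams of CO2
    print(f"{grid}: ~{grams / 1_000_000:,.0f} t CO2 for the same training run")
```

Same compute, same model size, wildly different footprint: the grid does most of the work.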
And that doesn’t stop there. Dr. Luccioni conducted further studies on the emissions associated with 10 popular Generative AI tasks.
Generating 1,000 images was responsible for roughly as much carbon dioxide as driving the equivalent of 4.1 miles in an average gasoline-powered car.
The least carbon-intensive text generation model was responsible for as much CO2 as driving 0.0006 miles in a similar vehicle.
Using large generative models to create outputs was far more energy intensive than using smaller AI models tailored for specific tasks. For example, using a generative model to classify positive and negative movie reviews consumed around 30 times more energy than using a fine-tuned model created specifically for that task
Moreover, they discovered that the day-to-day emissions associated with using AI far exceeded the emissions from training large models.
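To put those per-task figures side by side, here is a quick conversion of the mileage equivalents into grams of CO2. The roughly 400 g of CO2 per mile for an average gasoline-powered car is the only number added here, and it is an approximation; treat the output as order-of-magnitude arithmetic.

```python
# Convert the mileage equivalents quoted above into grams of CO2, using
# roughly 400 g of CO2 per mile for an average gasoline-powered car
# (an approximation; treat the results as order-of-magnitude only).
G_CO2_PER_MILE = 400

miles_equivalent = {
    "generating 1,000 images": 4.1,
    "least carbon-intensive text generation figure": 0.0006,
}

for task, miles in miles_equivalent.items():
    print(f"{task}: about {miles * G_CO2_PER_MILE:,.2f} g CO2")

print(f"ratio: roughly {4.1 / 0.0006:,.0f}x between the two figures")
```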
And it’s not only emissions. The data centres where those models are trained and run need water as a refrigerant and in some cases as a source of electricity.
Professor Shaolei Ren from UC Riverside found that training GPT-3 in Microsoft’s high-end data centers can directly evaporate 700,000 liters (about 185,000 gallons) of fresh water. As for the use, Ren and his colleagues estimated that GPT-3 requires about 500 ml (16 ounces) of water for every 10–50 responses.
Four questions for our politicians
It’s time our politicians step up to the challenge of exercising stewardship of AI for the benefit of people and the planet.
I have four questions to get them going:
Why are you allowing OpenAI to make decisions unilaterally on technology that affects us all?
How can you shift from a reactive stand where you enable Big Tech like OpenAI to drive the regulation for technologies that impact key aspects of governance — from our individual rights to national cybersecurity — to becoming a proactive key player on decisions that impact society’s future?
How can you hold Big Tech accountable for the planetary environmental costs?
How are you ensuring the public becomes digitally literate so they can develop their own informed views about the benefits and challenges of AI and other emergent technologies?
Back to you
How comfortable are you with OpenAI deciding on the use of Generative AI on behalf of humanity?
Two weeks ago, deepfake pornographic images of Taylor Swift spread like wildfire through X. It took the platform 19 hours to suspend the account that posted the content, after the images had amassed over 27 million views and more than 260,000 likes.
That gave me pause. 260,000 people watched the content, knew it was fake, and felt no shame in sharing their delight publicly. Wow…
I’ve written before about our misconceptions regarding deepfake technology. For example, we’re told that most deepfakes target politicians, but the reality is that 96% of deepfakes are of a non-consensual sexual nature and 99% of those depict women. I’ve also talked about the legal vacuum regulating the use of this technology.
However, until now I hadn’t delved into the ecosystem underpinning the porn deepfakes: the industry and the viewers themselves.
Let’s rectify this gap and get to know the key players.
Why is it so easy to access porn deepfakes?
We may be led to believe that porn deepfakes are hard to create or find.
False and false.
It takes less than 25 minutes and costs $0 to create a 60-second deepfake pornographic video. You only need one clear face image.
I can confirm that when searching on Google “deepfakes porn,” the first hit was MrDeepFake’s website — one of the most famous websites in the world of deepfake porn.
Moreover, the risk of hosting the content is minimal.
Section 230, which passed in 1996, is a part of the US Communications Decency Act. It was meant to serve as protection for private blocking and screening of offensive material.
However, it has become an ally of porn deepfakes, as it provides online platforms with immunity from civil liability for third-party content — they are not responsible for the content they host, and they can remove it in certain circumstances, e.g. material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.
So whilst Section 230 does not protect platforms that create illegal or harmful content, it exempts them from any responsibility for third-party content.
Who’s making money from porn deepfakes?
Many are profiting from this nascent industry: Creators, deepfake porn websites, software manufacturers, infrastructure providers, marketplaces, and payment processors.
Fan-Topia
It bills itself on Instagram as “the highest paying adult content creator platform.”
Paywalled.
Clients may be redirected from sites such as MrDeepFakes after clicking on the deepfake creators’ profiles. Once in Fan-Topia, they can pay for access to libraries of deepfake videos with their credit cards.
Pornhub
In 2018, the internet pornography giant Pornhub banned deepfake porn from their site. However, that’s not the whole truth
When Pornhub removes deepfake porn videos from their site, they leave the inactive links as breadcrumbs that act as clickbait to drive traffic to the site.
Users can advertise the creation and monetisation of porn deepfakes on the site.
They advertise deepfakes through TrafficJunky, the advertising portal through which Pornhub makes all their ad revenue.
Pornhub provides a database of abusive content that facilitates the creation of porn deepfakes.
Software manufacturers
A couple of examples
Stability AI has made their model Stable Diffusion — a deep learning, text-to-image model— open-source, so any developer can modify it for purposes such as creating porn deepfakes. And there are plenty of tips about how to use the models in forums where deepfake porn creators swarm.
Taylor Swift’s porn deepfake was created using Microsoft Designer, Microsoft’s graphic design app that leverages DALLE-3 — another text-to-image model — to generate realistic images. Users found loopholes around the guardrails that were meant to block prompts explicitly mentioning nudity or public figures.
Infrastructure providers
Repositories
GitHub is a Microsoft-owned developer platform that allows developers to create, store, manage, and share their code. It’s also
A host of guides and hyperlinks to (a) sexual deepfake community forums dedicated to the creation, collaboration, and commodification of synthetic media technologies, and (b) AI-leveraged ‘nudifiying’ websites and applications that take women’s images and “strip them” of clothing.
A repository of the source code of DeepFaceLab — the software used to create 95% of deepfakes — as well as similar projects such as DeepNude and Unstable Diffusion.
According to a Bloomberg review, 13 of the top 20 deepfake websites are currently using web hosting services from Cloudflare Inc. Amazon.com Inc. provides web hosting services for three popular deepfaking tools listed on several websites, including Deepswap.ai.
Marketplaces
Etsy
As of December 2023, AI-generated pornographic images of at least 55 well-known celebrities were available for purchase on Etsy, an American e-commerce company focused on handmade or vintage items and craft supplies.
Moreover, a search for “deepfake porn” on the website returned about 1,500 results. Some of these results were pornographic, and others offered non-explicit services to “make your own deepfake video.”
Payment processors
On the Fan-Topia payment page, the logos for Visa and Mastercard appear alongside the fields where users can enter credit card information. The purchases are made through an internet payment service provider called Verotel, which is based in the Netherlands and advertises to what it calls “high-risk” webmasters running adult services.
The MakeNude.ai web app — which lets users “view any girl without clothing” in “just a single click” — has partnered with Ukraine-based Monobank and Dublin’s Beta Transfer Kassa which operates in “high-risk markets”.
Deepfake creators also use PayPal and crypto wallets to accept payments. Until Bloomberg reached out to Patreon last August, they supported payment for one of the largest nudifying tools, which accepted over $12,500 per month.
Other enablers
Search engines
Between 50 and 80 percent of people searching for porn deepfakes find their way to the websites and tools to create the videos or images via search. For example, in July 2023, around 44% of visits to Mrdeepfakes.com were via Google.
NBC News searched the combination of a name and the word “deepfakes” with 36 popular female celebrities on Google and Bing. A review of the results found nonconsensual deepfake images and links to deepfake videos in the top Google results for 34 of those searches and the top Bing results for 35 of them.
As for the victims, both Google and Microsoft services require in their content removal requests that people manually submit the URLs.
Social media
More than 230 sexual deepfake ads using Emma Watson and Scarlett Johansson’s faces ran on Facebook and Instagram in March 2023. It took 2 days for Meta to remove the ads, once they were contacted by NBC.
Users of X, formerly known as Twitter, regularly circulate deepfaked content. Whilst the platform has policies that prohibit manipulated media, between the first and second quarter of 2023, the number of tweets from eight hashtags associated with this content increased by 25% to 31,400 tweets.
There were a total of 95,820 deepfake videos online in 2023.
The ten leading dedicated deepfake porn sites had monthly traffic of 35 million in 2023.
What about the deepfake porn consumers?
One study surveyed 1,522 American males who had viewed pornography at least once in the past six months. Some highlights:
48% of respondents reported having viewed deepfake pornography at least once.
74% of deepfake pornography users didn’t feel guilty about it. Top reasons they didn’t feel remorse? 36% didn’t know the person, 30% didn’t think it hurt anybody, 29% thought of it as a realistic version of imagination, and 28% thought that it’s not much different than regular porn.
That may lead us to believe that indeed those “watchers” felt porn deepfakes were innocuous. That’s until we learn that
73% of survey participants would want to report to the authorities if someone close to them became a victim of deepfake porn.
68% indicated that they would feel shocked and outraged by the violation of someone’s privacy and consent in the creation of deepfake pornographic content.
In summary, non-consensual deepfakes are “harmless” only as long as they don’t portray your loved ones, that is, until your mother or daughter is starring in them.
What’s next?
As with other forms of misogynistic behaviour — rape, gender violence, sexual discrimination — when we talk about deepfake pornography, we focus on the aftermath: the victims and the punishment.
What if we instead focused on the bottom of the pyramid — the consumers?
Can we imagine a society where the deepfake porn videos from Taylor Swift would have had 0 views and no likes?
What will it take to raise boys who feel outrage — rather than unhealthy curiosity, lust, and desire for revenge — at the opportunity to watch and purchase deepfake porn?
How about believing that porn deepfakes are harmful even if they don’t portray your sister, mum, or wife?
As with physical goods, consumers have the power to transform the supply. Can we collectively lead the way towards a responsible digital future?
PS. You and AI
Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it would take you years to ramp up your AI literacy?
Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
Are you concerned because your clients are prioritising AI but you keep procrastinating on learning about it because you think you’re not “smart enough”?
In the discussions around gender bias in artificial intelligence (AI), intentionality is left out of the conversation.
We talk about discriminatory datasets and algorithms but avoid mentioning that humans — software developers — select those databases or code the algorithms. Any attempts to demand accountability are crushed under exculpating narratives such as programmers’ “unconscious bias” or the “unavoidable” opacity of AI tools, often referred to as “black boxes”.
Patriarchy is much older than capitalism; hence, it has shaped our beliefs about those who have purchasing power and how they use it. So patriarchy wants us to believe that women don’t have money or power, and that if they do, they’ll spend it on make-up and babies and put up with services and products designed for men. Moreover, that women are expendable in the name of profits. All this while in 2009 women controlled $20tr in annual consumer spending and in 2023 they owned 42% of all US businesses.
Tech, where testosterone runs rampant, has completely bought into this mantra and is using artificial intelligence to implement it at scale and help others to do the same. That’s the reason it disregards women’s needs and experiences when developing AI solutions, deflects accountability for automating and increasing online harassment, purposely reinforces gender stereotypes, operationalises menstrual surveillance, and sabotages women’s businesses and activism.
Techno-optimism
Tech solutionism is predicated on the conviction that there is no problem tough enough that digital technology cannot solve and, when you plan to save the world, AI is the ultimate godsend.
It’s only through understanding the pervasiveness of patriarchy, meritocracy, and exceptionalism in tech that we can explain how the sector dares to brag about its limitless ability to tackle complex issues at a planetary scale with an extremely homogenous workforce, mainly comprising white, able-bodied, wealthy, heterosexual, cisgender men.
For instance, AI recruiting tools have regularly been portrayed as the end of biased human hiring. The results say otherwise. Notably, Amazon had to scrap its AI recruiting tool because it consistently ranked male candidates over women. The application had been trained on the company’s 10-year hiring history, which reflected the male prevalence across the tech sector.
Another example is the assumption of manufacturers of smart, internet-connected devices that the danger typically comes from the outside; hence, the need to use cameras, VPNs, and passwords to preserve the integrity of the households. But if you’re a woman, the enemy may be indoors.
Tech is also a master at deflecting its responsibility for how AI enables bullying and aggression towards women. For example, we’re told that we must worry about deepfakes threatening democracies around the world because of their ability to reproduce the voices and images of politicians and world leaders. The reality is that women bear the brunt of this form of AI.
How do machines know what a woman looks like? The Gender Shades study showed that the facial analysis algorithms used to classify gender were biased against darker-skinned females, with error rates of up to 35% compared with 1% for lighter-skinned males. Whilst Microsoft and IBM acknowledged the problem and subsequently improved their algorithms, Amazon blamed the auditor’s methodology.
Tech has a long tradition of capitalising on women and gender stereotypes to anthropomorphise its chatbots. The first one was created in 1966 and played the role of a psychotherapist. Its name was not that of a famous psychotherapist such as Sigmund Freud or Carl Jung, but Eliza, after Eliza Doolittle in the play Pygmalion. The rationale was that through changing how she spoke, the fictional character created the illusion that she was a duchess.
Tech actively sabotages women in areas such as self-expression, healthcare, business, finances, and activism.
AI tools developed by Google, Amazon, and Microsoft rate images of women’s bodies as more sexually suggestive than those of men. Medical pictures of women, photos of pregnant bellies, and images depicting breastfeeding are all at high risk of being classified as representing “explicit nudity” and removed from social media platforms.
It can escalate too. It’s not uncommon that women’s businesses relying on portraying women’s bodies report being shadow-banned — their content is either hidden or made less prominent by social media platforms without their knowledge. This practice decimates female businesses and promotes self-censoring to avoid demotion on the platforms.
While AI is naturally associated with the virtual world, it is rooted in material objects. Moreover, most tech software and platform giants — Apple, Google, Amazon, Microsoft, and Meta (aka Facebook) — are hardware providers as well. Datacentres, smartphones, laptops, and batteries rely heavily on metals such as cobalt and women often play a key role in their extraction and recycling.
For example, the Democratic Republic of Congo supplies 60% of the world’s cobalt. The mineral is extracted via artisanal and industrial mines. Some sectors welcome the integration of women into the artisanal mines as a means to empower them financially and as a substitute for children’s labour.
What has tech done about this? Software-only companies continue to look the other way, while those manufacturing hardware have avoided their responsibility as much as they could.
There is also a gendered division of labour in electronic waste, a €55 billion business. Women frequently have the lowest-tier jobs in the e-waste sector. They are exposed to harmful materials, chemicals, and acids as they pick and separate electronic equipment into its components, which in turn negatively affects their morbidity, mortality, and fertility.
Again, the focus of the efforts goes to reducing child labour, and women’s working conditions are lumped in with those of “adult” workers. An additional challenge compared with mining work is that hardware manufacturers control the narrative, highlighting their commitment to recycling materials across their products for PR purposes.
AI-powered misogyny beyond tech
Last but not least, tech companies are not the only ones using AI as a misogyny tool. Organisations and individuals around the world are ramping up quickly.
The baby-on-board market is a goldmine and technology is instrumental in helping vendors to exploit it. It has become habitual that retailers use AI algorithms to uncover and target pregnant girls and women.
Then, there is sexual exploitation. According to the United Nations, for every 10 victims of human trafficking detected globally, five are adult women and two are girls. Overall, 50 per cent of victims are trafficked for sexual exploitation (72% in the case of girls). Traffickers use online advertisements, social media platforms, and dating apps — all powered by AI — to facilitate the recruitment, exploitation, and exertion of control and pressure over the victims.
And thanks to generative AI, it has never been easier for individuals to create misogynistic content, even accidentally. Examples include:
ChatGPT replicating gender stereotypes when writing professional profiles, stressing communal skills for women while highlighting financial achievements for men.
Tech has embraced the patriarchal playbook in its adoption and deployment of artificial intelligence tools. Hoping to reap massive financial returns, the sector is unapologetically fostering gender inequity and stereotypes.
As Black feminist Audre Lorde wrote, “The master’s tools will never dismantle the master’s house.” Whilst tech continues to be run by wealthy white men who see themselves as the next Messiah, misogyny and patriarchy will be a feature and not a bug of artificial intelligence applications.
We need a diverse leadership in tech that sees women as an underserved market with growing purchasing and executive power. Tech also needs investors to understand that outdated patriarchal beliefs about women being a “niche” don’t serve them well.
Finally, tech needs to assume responsibility for the tools it creates, and that goes beyond monitoring app performance. It starts at the ideation stage by asking uncomfortable ethical questions such as “Should we build that?”
Because not all speed is progress.
NOTE: This article is based on a piece that I wrote previously for The Mint.
Gender violence campaigns traditionally focus on physical violence: sexual harassment, rape, femicide, child marriage, or sex trafficking. The perpetrators? Partners, family members, human traffickers, soldiers, terrorists.
But that’s not all. You may be a victim of digital violence right now — in the comfort of your home.
When talking about deepfakes, most media refer to the threats they may pose to democracy. That was exemplified in the famous deepfake video of Obama in 2018, where he called Donald Trump a “total and complete dipshit”. Although that video was clearly false, it did show the potential of the technology to meddle in elections and spread disinformation.
Capitalism and deepfakes
In addition to the threat to political stability, the benefits and threats posed by deepfakes are often framed in a capitalistic context:
Art — Artists use deepfakes technology to generate new content from existing media created by them or by other artists.
Caller response services — Provide tailored answers to caller requests that involve simplified tasks (e.g. triaging and call forwarding to a human).
Customer support — These services use deepfake audio to provide basic information such as an account balance.
Entertainment — Movies and video games clone actors’ voices and faces for convenience or even for humorous purposes.
Deception — Fabricating false evidence to inculpate — or exculpate — people in a lawsuit.
Fraud — Impersonating people to gain access to confidential information (e.g. credit cards) or to prompt people to act (e.g. impersonating a CEO and requesting a money transfer).
Stock manipulation — Deepfake content such as videos from CEOs announcing untrue news such as massive layoffs, new patents, or an imminent merger can have a massive impact on a company’s stock.
As a result of that financial focus, tech companies and governments have concentrated their efforts on assessing whether digital content is a deepfake or not. Hence the proliferation of tools aimed at “certifying” content provenance, as well as legal requirements in some countries to label deepfakes.
And many people share the same viewpoint. It’s not uncommon that, when discussing deepfakes, my interlocutors dismiss their impact with remarks such as “It’s easy now to spot if they’re fake or not”.
But the reality is that women bear the brunt of this technology.
A 2019 study found that 96% of deepfakes are of a non-consensual sexual nature, and 99% of those depict women. As I mentioned in the article Misogyny’s New Clothes, they are a well-oiled misogyny tool:
They aim to silence and shame women. That includes women politicians: 42% of women parliamentarians worldwide have experienced extremely humiliating or sexually charged images of themselves spread through social media.
They objectify women by dismembering their bodies — faces, heads, bodies, arms, legs — without their permission and reassembling them as virtual Frankensteins.
They are the newest iteration of revenge porn — hate your colleagues? Tired of the women in your cohort ignoring you? Create deepfake videos of them from their LinkedIn profile photos and university face books and plaster the internet with them.
They disempower victims — Unlike “older” misogyny tools, women cannot control the origin of deepfakes, how they spread, or how to eliminate them. Once they are created, women’s only recourse is to reach out directly to the platforms and websites hosting them and ask for removal.
If 96% are non-consensual porn, why don’t we do anything about it?
We think they are not as harmful as “real” porn because the victim didn’t participate in them. What we miss is that we “see” the world with our minds, not with our eyes. If you want a taste of how that feels, you can watch the chilling 18-minute documentary My Blonde GF by The Guardian, where the writer Helen Mort details her experience of being deepfaked for pornography.
Knowing that it’s fake is of little relief when you know that your family, friends, and colleagues have watched or could eventually watch them. Moreover, there is research proving that deepfake videos create false memories.
As we believe that “it’s not the real you, it’s fake”, victims receive little support from the justice system and governments in general. You can watch this 5-minute video from YouTuber and ASMR (Autonomous Sensory Meridian Response) artist Gibi, who has been repeatedly targeted by deepfakes and shares the very real consequences of this practice, which is perfectly legal in most countries.
Talking about governments, let’s check how countries regulate deepfakes. China’s rules for “deep synthesis” services, for instance, require the following:
“Companies have to get consent from individuals before making a deepfake of them, and they must authenticate users’ real identities.
The service providers must establish and improve rumor refutation mechanisms.
The deepfakes created can’t be used to engage in activities prohibited by laws and administrative regulations.
Providers of deep synthesis services must add a signature or watermark to show the work is a synthetic one to avoid public confusion or misidentification.”
On Friday 8th December 2023, the European Parliament and the Council reached a political agreement on the Artificial Intelligence Act (AI Act), proposed by the Commission in April 2021. Although the full text is not available yet, the Commission published an announcement where deepfakes are categorised as specific transparency risks:
“Deep fakes and other AI generated content will have to be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems in a way that synthetic audio, video, text and images content is marked in a machine-readable format, and detectable as artificially generated or manipulated.”
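To make “machine-readable marking” concrete, here is a minimal sketch of my own (an illustration only, not the AI Act’s prescribed mechanism nor a provenance standard such as C2PA): a generator embedding an “ai-generated” flag in a PNG’s metadata using the Python Pillow library, so that any downstream tool can read the label back automatically. The key names are invented for this sketch.

```python
# Minimal illustration of machine-readable labelling of a synthetic image.
# The key names ("ai-generated", "generator") are invented for this sketch;
# real provenance schemes are far more robust and tamper-resistant.
from PIL import Image, PngImagePlugin


def save_with_synthetic_label(image: Image.Image, path: str) -> None:
    metadata = PngImagePlugin.PngInfo()
    metadata.add_text("ai-generated", "true")            # the machine-readable flag
    metadata.add_text("generator", "example-model-v1")   # hypothetical model name
    image.save(path, pnginfo=metadata)


def is_labelled_synthetic(path: str) -> bool:
    # Downstream platforms could check the flag without human inspection.
    return Image.open(path).text.get("ai-generated") == "true"
```

Of course, metadata like this is trivial to strip, which is why detection tools and stronger provenance standards remain necessary.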
The remedy of our patriarchal society against physical violence towards women has been to encourage them to self-suppress their rights so that the perpetrators can roam free.
For example, we tell women that to avoid becoming a victim of violence they should stay at home at night, avoid dark places, or not wear miniskirts. If they fail to do so and get harmed, they are met with remarks such as “She was looking for it”.
I hope you’re not expecting me to exhort women to disappear from Instagram, get rid of their profile photos on LinkedIn, or stop publishing videos on TikTok. Quite the opposite. It’s not for us to hide from deepfake predators; it’s for platforms and regulators to do their job.
My call to action to you is threefold:
1.- Take space: Let’s not allow this technology to make us invisible on social media — hiding has never challenged the status quo. It’s a survival mechanism. If we hide now because we’re afraid of deepfakes, we’ll never be safe on the internet again.
3.- Demand action: Lobby to make platforms, software development companies, and governments accountable for making us safe from non-consensual sexual deepfakes.
BACK TO YOU: What’s your take on deepfakes? Should they be fully banned? Do you believe the benefits outweigh the risks?
Cathy Robinson, her daughters Macey (2) and Lilly (1), and partner Paddy Reid, father of Lilly. Image credit: Liam McBurney/PA. Source: Centre for Homelessness Impact Library.
I’m happy to write that recently I got my first board position. More precisely, I’ve been appointed trustee at the Booth Centre, a UK charity based in Manchester with the mission to bring about positive change in the lives of people who are homeless or at risk of homelessness and help them plan for and realise a better future.
This is a very important milestone for me, so I wanted to take the time to savour it whilst I share it with you
Why did I join a board, and why should you too?
How did I get the role?
Why homelessness?
Let’s jump in!
Why did I join a board and why should you join one too?
A board of directors must ensure that the company’s corporate governance policies incorporate corporate strategy, risk management, accountability, transparency, and ethical business practices.
Similarly, a board of trustees has overall responsibility and accountability for everything the charity does. Trustees are ultimately responsible for ensuring that their charity complies with charity law and any other legal requirements.
In summary, boards are key to ensuring that organisations deliver on their mission and strategy and do so taking into account the law and relevant regulations.
How does that look in practice? Many of you may be aware by now of the board drama that played out at OpenAI — developers of the Generative AI tools ChatGPT and DALL·E — during the last week. They have a very particular structure: they are governed by a nonprofit and have a capped-profit model that’s meant to ensure their commitment to safety.
On Friday November 17, their board of directors fired the CEO, Sam Altman, then appointed a provisional CEO, then appointed another interim CEO, and then on Tuesday reinstated Altman. All in less than 7 days. It’s still not clear what the exact reason was or who the main instigator (or instigators) of the overhaul were.
But the board also changed. Before last week, it comprised Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), plus non-employees Adam D’Angelo (Quora CEO and ex-Facebook), Tasha McCauley (GeoSim Systems CEO), and Helen Toner (Director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology).
After the reinstatement of Altman, only D’Angelo remains. Accompanied by two other members:
Bret Taylor, the new chairman. He has been co-CEO at Salesforce and worked at Facebook and Google.
Larry Summers, former US Treasury Secretary and former president of Harvard University.
So, we now have the leading company developing Generative AI products with a board of three white men: two tech bros and a man who believes that women are genetically inferior to men in terms of science and engineering aptitude.
What’s not to like?
All that, when we have evidence of the benefits of having women on boards. For example, a 2023 study surveyed women and men directors at more than 200 publicly traded companies on the major stock exchanges in the US and Europe. The results provide key insights into how the presence of women influences boards. First, it turns out that women directors come to board meetings well-prepared and concerned with accountability. Second, women are not shy about acknowledging when they don’t know something, are more willing to ask in-depth questions, and seek to get things on the table. As a result, the presence of women improves the quality of discussion. Finally, “the presence of women seems to diminish the problem of ‘pluralistic ignorance’ — when individuals in a group underestimate the extent to which others may share their concerns.”
And it’s not only about women’s representation. Basically, we need diverse boards that benefit from members with different identities and backgrounds to drive innovation and successfully tackle the complexity of challenges organisations endure nowadays.
Still, as we see with the case of OpenAI, we’d rather stick with the “boys’ club”.
That’s where you and I have a role to play.
How did I get the role?
It was actually only about four years ago that I began to think about broadening my impact by getting a board role. It has taken time, perseverance, and support to find this trustee position that aligns with my values:
The first time I even considered the idea of being on a board was during a presentation from Fiona Hathorn from Women on Boards at a women in tech conference prior to the pandemic. It was like a door to another world opened for me.
Then, I joined Women on Boards, where I learned about board CVs, was coached on how to interview for board positions, and got into the habit of perusing their weekly board openings for three years.
In 2022, I attended a webinar where Hedwige Nuyens talked about how European Women on Boards (EWOB) had been working in Brussels to turn into reality the European Union’s Directive that introduces a binding objective of at least 40% of board members of each gender by 2026. At that moment, I realised that being on a board was more than a milestone in my career progression; it was about gender equity in decision-making.
Next, I joined the EWOB’s C-Level Program. The content, the speakers, and the rest of the cohort were amazing. During 4 months I looked forward to every second Thursday to savour the energy of working with another 39 women leaders for 3 intense hours. I thoroughly enjoyed crafting the presentation about the metaverse and working on the case study of the Facebook Cambridge Analytica scandal.
Later on, I joined the EWOB partnership team where I helped to build partnerships with UK organisations such as the Institute of Directors (IoD) and spearheaded collaborations with initiatives such as Women in Risk and Control (WiRC).
During those years when I was keeping an eye on the advertised board roles, there were many people and groups that provided advice and, without maybe knowing it, kept me accountable for finding a board role in spite of the rejections along the way.
Finally, interviewing for the Booth Centre was a truly enjoyable experience. In addition to its purpose — which I’ll talk about in the next section — the interview process made me feel that my lived experience as an immigrant and my professional skills as an inclusion strategist were both valued and would bring complementary perspectives to the organisation. As I wrote before, this truly made me feel welcome — not just “tolerated”. The upside for the organisation? Even if I hadn’t got the role, I’d still be thinking highly of them.
Why homelessness?
Some of you may be wondering why I chose to be a trustee of a charity focused on homelessness and not one that supports women only. After all, I’ve been very vocal about my identity as a feminist.
My answer is that tackling homelessness is a very feminist issue because, among other things, it is about:
And homelessness is in need of a feminist approach now more than ever because:
When we talk about inclusion, we often forget about homeless people. Moreover, we “classify” them as “people sleeping rough”, which is not representative of the scale of the problem. Often, our stereotypical mental image of a homeless person is a white man in his 40s-50s to whom we attach labels such as alcohol, drugs, and mental illness. That’s not the full picture.
Whilst there are about 2,400 people in the UK sleeping rough on any given night, there are more than 83,000 households assessed as homeless or threatened with homelessness. This is called statutory homelessness.
But the problem is even bigger. There are people effectively homeless but neither visible nor in official homeless stats — e.g. severe overcrowding, concealed or sharing. It’s called hidden homelessness.
The economic crisis puts more people at risk of eviction.
It’s forecasted that artificial intelligence may have a big impact on the workforce. Those bearing the brunt of the layoffs may be less able to afford their house rent.
We hear our politicians talk about homelessness being a lifestyle choice, criminalise immigrants, and miss that homelessness is a symptom, not an illness. A symptom of a society that doesn’t “tolerate” what it sees as “failure”. That blames those who fall through the cracks of the system, differ from the stereotype of what’s considered a “valuable contributor”, or are labelled as “broken” or “losers”. In summary, a society that is a collection of individuals rather than a community of interconnected human beings.
As if this was not enough, Generative AI is making it easier to reinforce our biased mental models. When asked to ‘describe a homeless person’, a Gen AI tool answered with the following:
“A homeless person looks disheveled, with grimy clothes and unkempt hair. They move from place to place with all their possessions, often scavenging from bins. Their faces show a certain amount of sadness and loneliness with a broken spirit that tells a story of a difficult journey. There is often a sense of hopelessness about them, a feeling of being lost and out of place.”
And images of homeless people produced by Generative AI tools when prompted to draw a ‘person experiencing homelessness’ often reproduce those harmful stereotypes: white men in their 40s-50s with long beards dressed in stained outdoor hiking jackets.
In summary, no shortage of angles that can benefit from a feminist framework!
Wrapping up
I hope by now I’ve convinced you that you can be part of the solution by aiming high — at the board level.
Some ways you can do that are
Applying for board and trustee positions.
If you work for a publicly traded company, you have access to a lot of information about the board: for example, who its members are, how much they are paid, or what resolutions they have passed. What does that tell you about who oversees the strategy of your company?
Check the makeup of the boards of the organisations you admire or of companies that create products you like and compare them with their values and mission statements around diversity and inclusion — do they walk the talk? If not, what can you do as a buyer?
Generative AI — and more precisely ChatGPT and text-to-image tools like Midjourney — has prompted a flurry of strikes and pushback from visual and writing professionals. And rightly so.
The reason? Book authors, painters, and screenwriters feel it’s unfair that tech companies earn money by creating tools built on scraping their work, the result of many years spent learning their craft. All that without acknowledging intellectual property or providing financial compensation.
They say that this is “the first time in history” this has happened.
I dissent. This has been happening for centuries — to women. Let me explain.
There are three reasons that typically come up to explain why there haven’t been more women artists and scientists through the centuries:
Women have been too busy with children and house chores to dedicate time — and have the space — to scientific and artistic pursuits.
In many cultures, men have been prioritised over women to go to school and university.
To avoid bias against their work, some women decided to publish under a male pen name or to disguise themselves as men.
But there is a fourth cause: when women’s outstanding work has been credited to a man. So although the work itself may have won a Nobel Prize or be showcased in museums, libraries, and galleries, it has been attributed to a man instead of the rightful female author.
Hepeating: When a man takes credit for what a woman already said
Let’s review some unsung sheroes of science and art.
Science and art — a land with no women?
Let’s start with science
One of the most famous cases is that of Rosalind Franklin. Her “work was central to the understanding of the molecular structures of DNA (deoxyribonucleic acid)”, but her contribution was erased by the academic community, which awarded the 1962 Nobel Prize in Physiology or Medicine to Francis Crick, James Watson, and Maurice Wilkins — who used part of her research — for the discovery of the DNA double helix.
Candace Pert discovered the brain’s opiate receptor during her time as a graduate student at Johns Hopkins University. The discovery led to an award for her professor, Dr. Solomon Snyder. When she protested the fact that her contribution had been neglected, he replied, “That’s how the game is played.”
In the 12th century, “Trota of Salerno” authors a gynaecology handbook, On the Sufferings of Women. However, until the end of the last century, scholars falsely assumed Trota was a man.
In 1818, “Mary Shelley publishes Frankenstein anonymously. Her husband pens the preface and people assume he was behind it.”
In 1859, “after 10 years working with engineers to design signal flares, Martha Coston is listed as “administratrix” on the patent. Her long-dead husband is listed as the inventor.”
In 1970, “forty-six female researchers sued the magazine Newsweek, alleging that male writers and editors took all the credit for their efforts”.
And the uncredited others
Healers and midwives — Women were the original healers, using herbs and remedies to cure ailments and help with deliveries, contraception, and abortion. As no good deed goes unpunished, a lot of them ended up burning at the stake. How much of our current medicine is based on those uncredited healers?
Above I shared some examples of women’s extraordinary work stolen by others (or conveniently forgotten).
But the problem runs deeper, because we’re educated to consider men’s contributions extraordinary whilst considering those of women ordinary.
Let’s take parenthood. A woman takes her children to school — it’s her job. A man takes his children to school — he’s a dedicated father and a beacon for other parents.
A woman leads a project — she’s organised. A man leads a project — he’s a project manager.
Women are “cooks” and men are “chefs”.
And the list goes on…
What to do differently?
Let’s start acknowledging good work by women — and I’m very intentional when I say “good” and not “stellar” work.
At the same time, let’s stop glorifying every little thing a man does. Is setting up the washing machine really such a big accomplishment?
But how to overcome millennia of indoctrination?
Five years ago, I published a post showcasing a 6-min TED talk from Kristen Pressner where she explained a practical technique to double-check our gender biases. It’s called “Flip it to test it!”
It’s a very simple method: When in doubt, flip the gender and see how it lands.
In practice
Would you praise John for taking his children to school if it were their mother, Jane, doing it instead?
Would you diminish the role of Rita leading a project as simply being “a good team player” if Mike had led the project instead?
In summary, let’s purposely acknowledge the good work of women around us. We cannot overdo it — we have centuries to catch up on.
Feminist Tech Career Accelerator
Three things are keeping you from getting the tech career you deserve
Your Brain * Your Education * Patriarchy
Thrive In Your Tech Career With Feminist Guidance
Achieve your career goals * Work smart * Earn more
In the last two weeks, I’ve had the privilege to attend four different conferences focused on women and I’ve presented at two of them.
The topics discussed were as complex and rich as women’s lives: neurodiversity in the workplace, women in politics, childcare, artificial intelligence and the future of the female workforce, child labour, impossible goals and ambition, postpartum depression at work, career myths, women in tech, accessibility, quotas… and so many more.
The idea for this article came from my numerous “aha” moments during talks, panels, and conversations at those events. I wanted to share them broadly so others could benefit as well.
I hope you find those insights as inspiring, stimulating, and actionable as I did.
The keynote speakers and panels were excellent. The discussions were thought-provoking and space was held for people to voice their dissent. I especially appreciated listening to women politicians discuss feminist issues.
Below are some of my highlights
The need to find a space for feminist men.
It’s time for us to go outside our comfort zone.
“If men had the menopause, Trafalgar Square Fountain would be pouring oestrogen gel.”
If we want to talk about averages, the average voter is a woman. There are slightly more women than men (51% women) and they live longer.
Men-only decision-making is not legitimate, i.e. not democratic. Women make up the majority of individuals in the UK but the minority in decision-making. Overall, diversity is an issue of legitimacy.
The prison system for women forgets their children.
Challenging that anti-blackness/racism is not seen as a topic at the top of the agenda for the next election.
We believe “tradition matters” so things have gone backwards from the pandemic for women.
In Australia, the Labour Party enforced gender quotas within the party. That led to increasing women’s representation to 50%. The Conservative Party went for mentoring women — no quotas — and that only increased women’s participation to 30%.
There is a growing toxicity in X/Twitter against women. Toxic men’s content gets promoted. We need better regulation of social media.
More women vote but decide later in the game.
We cannot afford not to be bold with childcare. The ROI is one of the highest.
We need to treat childcare as infrastructure.
There are more portraits of horses in parliament than of women.
Empowered to Lead Conference 2023
On Saturday 28th October, I attended the “Empowered to Lead” Conference 2023 organised by She Leads for Legacy — a community of individuals and organisations working together to reduce the barriers faced by Black female professionals aspiring to senior leadership and board-level positions.
It was an amazing day! I didn’t stop all day: listening to inspiring role models, taking notes, and meeting great women.
We ask people what they want to do only when they are children — that’s wrong. We need to learn and unlearn to take up the space we deserve.
Three nuggets of wisdom: Audacity/confidence, ambition, and creativity/curiosity.
Audacity — Every day we give permission to others to define us. Audacity is about being bold. Overconsultation kills your dream. It’s about going for it even if you feel fear.
Creativity & curiosity — It takes discipline not to focus on the things that are already there. Embrace diverse thinking.
Question 1: What if you were the most audacious, the most ambitious, and the most creative?
Question 2: May you die empty? Would you have used all your internal resources?
Baroness Floella Benjamin DBE
Childhood lasts a lifetime. We need to tell children that they are worth it.
Over 250 children die by suicide a year.
When she arrived in the UK, there were signs with the text “No Irish, no dogs, no coloureds”.
After Brexit, a man pushed his trolley onto her and told her, “What are you still doing here?” She replied, “I’m here changing the world, what are you doing here?”
She was the first anchor-woman to appear pregnant on TV in the world.
“I pushed the ladder down for others.”
“The wise man forgives but doesn’t forget. If you don’t forgive you become a victim.”
‘Every disappointment is an appointment with something better’.
Jenny Garrett OBE
Rather than talking about “underrepresentation”, let’s talk about “underestimation”.
Nadine Benjamin MBE
What do you think you sound like? Does how you sound support who you want to be?
You’re a queen. Show up for yourself.
Additionally, Sue Lightup shared details about the partnership between Queen Bee Coaching (QBC) — an organisation for which I volunteer as a coach — and She Leads for Legacy (SLL).
Last year, QBC successfully worked with SLL as an ally, providing a cohort of 8 black women from the SLL network with individual coaching from QBC plus motivational leadership from SLL.
At the conference, the application process for the second cohort was launched!
Women in Tech Festival
I delivered a keynote at this event on Tuesday 31st October. The topic was the impact of artificial intelligence (AI) on the future of the female workforce.
When I asked the 200+ attendees if they felt that the usage of AI would create or destroy jobs for them, I was surprised to see that the audience was overwhelmingly positive about the adoption of this technology.
Through my talk, I shared the myths we have about technology (our all-or-nothing mindset), what we know about the impact of AI on the workforce from workers whose experience is orchestrated by algorithms, and four different ways in which we can use AI to progress in our careers.
The talk was very well received and people approached me afterwards sharing how much the keynote had made them reflect on the impact of AI on the labour market. I also volunteered for mentoring sessions during the festival and all my on-the-fly mentees told me that the talk had provided them with a blueprint for how to make AI work for them.
I also collected gems of wisdom from other women’s interventions
Our workplaces worship the mythical “uber-productive” employee.
We must be willing to set boundaries around what we’re willing to do and what not.
It may be difficult to attract women to tech startups. One reason is that it’s riskier, so women may prefer to go to more established companies.
Workforce diversity is paramount to mitigate biases in generative AI tools.
I found the panel about quotas for women in leadership especially insightful
Targets vs quotas: “A target is an aspiration whilst a quota must be met”.
“Quotas shock the system but they work”.
Panelists shared evidence of how a more diverse leadership led to a more diverse offering and benefits for customers.
For quotas to work, it is crucial to look at the data. Depending on the category, it may be difficult to get that data. You need to build trust — show that it’s for a good purpose.
In law firms, you can have 60% of solicitors being women, but when you look at the partners it’s a different story — they are mostly men.
A culture of presenteeism hurts women in the workplace.
Organisations lose a lot of women through perimenopause and menopause because they don’t feel supported.
There was a very interesting panel on neurodiversity in the workplace
Neurodivergent criteria have been developed using neurodivergent men as the standard so often they miss women.
The stereotype is that if you have ADHD, you should do badly in your studies. For example, a woman struggled to get an ADHD diagnosis because she had completed a PhD.
Women mask neurodivergent behaviours better than men. Masking requires a lot of effort and it’s very taxing.
We need more openness about neurodiversity in the workplace.
The title of my talk was “Seven Counterintuitive Secrets to a Thriving Career in Tech” and the purpose was to share with the audience key learnings from my career in tech across 3 continents, spearheading several DEI initiatives in tech, coaching and mentoring women and people from underrepresented communities in tech, as well as writing a book about how women succeed in tech worldwide.
First, I debunked common beliefs such as that there is a simple solution to the lack of women in leadership positions in tech or that you need to be fixed to get to the top. Then, I presented 7 proven strategies to help the audience build a successful, resilient, and sustainable career in tech.
I got very positive feedback about the talk during the day and many women have reached out on social media since to share how they’ve already started applying some of the strategies.
Some takeaways from other talks:
I loved Becki Howarth’s interactive talk about allyship at work where she shared how you can be an ally in four different aspects:
Communication and decision-making — think about power dynamics, amplify others, don’t interrupt, and create a system that enables equal participation.
Calling out (everyday) sexism — use gender-neutral language, you don’t need to challenge directly, support the recipient (corridor conversations).
Stuff around the edges of work — create space for people to connect organically, don’t pressure people to share, and rotate social responsibilities so everyone pulls their weight.
Taking on new opportunities — some people need more encouragement than others, and ask — don’t assume.
The talk of Lydia Hawthorn about postpartum depression in the workplace was both heartbreaking and inspiring. She provided true gems of wisdom:
Up to 15% of women will experience postpartum depression.
Talk about the possibility of postpartum depression before it happens.
Talk to your employer about flexible options.
Consider a parent-buddy scheme at work.
Coaching and therapy can be lifesaving.
Amelia Caffrey gave a very dynamic talk about how to use ChatGPT for coding. One of the most interesting points she raised, for me, is that there is no longer any excuse for writing inaccessible code. For example, you can add to the prompt the requirement that the code must be accessible to people using screen readers.
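To make that concrete, here is a hedged sketch of my own (not from Amelia’s talk) using the openai Python package, where the accessibility requirement goes straight into the prompt. The model name and the prompt wording are illustrative assumptions.

```python
# Sketch only: the model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write an HTML sign-up form with name and email fields. "
    "Requirement: the markup must be accessible to screen-reader users, "
    "with associated <label> elements, descriptive error messages linked "
    "via aria-describedby, and a logical focus order."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same pattern works for any accessibility requirement, from colour contrast to keyboard navigation: if it isn’t in the prompt, it’s unlikely to be in the output.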
Finally, one of the most touching talks was from Eleanor Harry, Founder and CEO of HACE: Data Changing Child Labour. Their mission is to eradicate child labour in company supply chains.
There are 160 million children in child labour as of 2020. HACE is launching the Child Labour Index, the only quantitative metric in the world for child labour performance at a company level. Their scoring methodology is based on cutting-edge AI technologies combined with HACE’s subject matter expertise. The expectation is that the index will provide the investor community with quantitative leverage to push for stronger company performance on child labour.
Eleanor’s talk was an inspiring example of what tech and AI for good look like.
Back to you
With so many men competing in the news, social media, and bookstores for your attention, how are you making sure you give other women’s wisdom the consideration it deserves?
Work with me — My special offer
“If somebody is unhappy with your life, it shouldn’t be you.”
You have 55 days to the end of 2023. I dare you to
Leave behind the tiring to-do list imposed by society’s expectations.
Learn how to love who you truly are.
Become your own version of success.
If that resonates with you, my 3-month 1:1 coaching program “Upwards and Onwards” is for you.
For £875.00, we’ll dive into where you are now and the results you want to create, we’ll uncover the obstacles in your way, explore strategies to overcome them, and implement a plan.
I reproduce the interview below. At the end, you’ll find additional reflections framed as a Q&A.
Interview
Patricia Gestoso is an award-winning technologist and inclusion strategist with over 20 years of experience in digital transformation, with a focus on client service, artificial intelligence, and the inclusive and ethical design of technology and workplaces.
Patricia will be giving a talk about the impact of AI on the workplace and workers at the Women in Tech Festival in October. We do hope you’ll be able to join us.
In the meantime, we caught up with Patricia and asked her to give us a taster.
How did you become interested in the topic of AI?
As a Director of Support for a scientific and engineering software corporation, I see how AI helps our customers every day to accelerate drug discovery, clinical trials, and research on new materials.
On the flip side, as an inclusion strategist and collaborator on initiatives such as the Race and AI toolkit and Better Images of AI, I’m also aware of the different ways in which AI helps encode and automate biases.
That’s the reason why in the last three years I’ve been actively fostering discussion about the benefits and challenges that AI brings to inclusion, equity, and sustainability on social media as well as through keynotes and articles.
Your talk is titled: “Automated out of work: AI’s impact on the female workforce”. Are women likely to be disproportionately affected in the next wave of automation?
It’s important to take a step back and see where those predictions that women are more likely to be negatively affected in the next wave of automation come from. They rest on several assumptions.
First, that certain sectors will be more impacted than others. Then, that the impact on those sectors will fall on the less-skilled workers; next, that those workers are women; and finally, that people prefer to interact with machines rather than with humans.
On the flip side, we have other studies telling us that the most impacted will be white-collar workers like software engineers – who are overwhelmingly men – or lawyers, where which gender is overrepresented depends on the practice area.
In case this was not contradictory enough, we’re also told that the roles that AI won’t displace will be those that are related to soft skills and studies show that women are great at those – collaboration, listening, and championing a common plan.
The reality is that when we look at who’s already impacted by automation, it’s easy to argue that it’s mostly men: workers at Amazon’s warehouses, Uber drivers, or Deliveroo riders. Their work is scheduled and constantly monitored by AI. Moreover, when we look at who’s raising the alarm about generative AI stealing their jobs right now, we see book authors, screenwriters, and actors. Again, professions that are far from falling into the “female job” category.
For me, talking about the next wave of automation disproportionately affecting women is to deflect from the reality that AI is already affecting the workforce dramatically right now. And it’s not fortuitous. It’s the old strategy of “divide and conquer”. By saying “it’ll be worse in the future and women’s jobs will be the most affected,” it aims to keep men quiet with the false premise that they should conform because their jobs are “safe”.
Are there ways that women and other underrepresented groups can harness the technology to their advantage to mitigate some of these scenarios? If so what do they need to do and where should they start?
I’ll go into more detail in my talk, but there are three obvious areas where women and underrepresented groups can harness technology to their advantage.
First, increasing their negotiation power. If we look at the Industrial Revolution, the disruption was massive: loss of jobs, exhausting work schedules, child labour. What changed the game? Unions. This is no different now with Amazon workers and screenwriters. Social platforms and digital tools such as apps are powerful means to organise resistance.
Next, learning about AI. Ignoring new technology is not the answer because AI is not going away anytime soon. However, when I say learning, I’m not necessarily suggesting you become an AI software developer. I’m talking about following the major trends in AI, understanding how they impact your industry – what the major risks and possible rewards are – and getting involved in projects aimed at exploring the capabilities that AI can bring to your business.
Finally, discovering how AI can augment you as a professional. We see a lot in the media about the need to learn about how to work “for” or “with” AI. For me, the key is to learn how you can use AI tools to strengthen your capabilities.
Tech has a tendency to concentrate power and wealth in the hands of the already rich and powerful. Is AI likely to continue or even exacerbate this tendency?
AI is already benefiting those who have privileges and disadvantaging those who face more challenges. The Race and AI toolkit mentioned previously showcases many examples where non-White people are consistently sidelined by AI in areas such as healthcare, education, and justice.
The reason? Garbage in, garbage out. We’re feeding AI data that is generated by narrow sectors of the population and that doesn’t reflect our diversity or values as a society.
Unfortunately, attempts to limit the reach of AI tools are seen as attempts to stop progress. No different from what happened to the Luddites 200 years ago. The reality is that tech is playing to our FOMO – fear of missing out – telling us we either let AI run wild or we’ll miss out on new drugs and cures for cancer. To me, that’s akin to saying you either let fire run wild or you won’t have fire at all. We’ve survived because we decided that we’re happy to have fire to cook and heat ourselves, but that if it spreads to our curtains we’ll put it out. AI shouldn’t be treated differently.
Who do you hope to reach with your keynote at the Women in Tech Festival?
I hope my talk reassures those who are frightened that AI will take their jobs that they are not powerless. I also aim to provide actionable strategies to incorporate AI into their professional careers to those that are wondering how to jump on the AI bandwagon. Finally, I hope to reach out to those who are curious about exploring alternative futures to dystopia and utopia, where rather than humans in the loop, humans are in the driving seat and machines are in the loop.
Additional reflections on women, work, and AI
What are your concerns regarding how AI will affect the future of work for women?
The main one is deskilling. To understand the concept, it is useful to remember the Luddite movement that I mentioned above.
Most were trained artisans who had spent years learning their craft, and they feared that unskilled machine operators were robbing them of their livelihood. As you see, their problem was not the technology in itself but the deskilling of workers.
And I could see how that may happen to women in the future. For example, those with university degrees in computing could be offered work as “prompt engineers” when they come back from maternity leave, with the resulting career and salary demotion. Or administrative professionals may get relegated to fact-checking and improving reports produced by generative AI applications, making their contribution “invisible”.
Is technology an enemy of women?
Technology has enabled women to get financially remunerated for their work. Consider the washing machine, tap water, and electricity. In places where those technologies are not available, women spend their days making up for it – typically for free.
The problem has always been that women have only been able to benefit from technology when it suited men.
For example, during the Industrial Revolution, women and children worked for less pay, which was very profitable for companies.
Women tended to receive between one-third and one-half of a man’s average salary. As the manufacturing industries began to grow, they took advantage of these low average salaries amongst women and children. The ability to employ these women and children for little pay proved to be very beneficial to these companies. Many industries exploited these people’s need for money, as they would turn a major profit in exchange for very cheap labour. Tasks such as printing, spinning, and other duties commonly learned at home were easy jobs to learn and were some of the most profitable.
As we can see, both the gender pay gap and genderisation of work were already at the core of the Industrial Revolution.
Another example is the tech sector. In the 1930s, women were hired to solve mathematical problems that were considered at the time as repetitive work. Some of those calculations were as complex as determining how to get a human into space and back. When computers took off in the 1960s women became the programmers while men focused on the hardware which was regarded as the most challenging work.
The same with AI. We like to anthropomorphise artificial intelligence to deflect our responsibility. We say “AI will automate jobs” or “AI will replace people” but the reality is that those decisions are and will be taken by humans.
In summary, it’s not technology that is the enemy of women’s paid work, but other human beings who see that work as “nice to have” and not deserving the same remuneration as men’s. Human beings are also the ones who decide that caregiving for family members is “not a job”.
The biggest threat to women’s work is not AI. It’s patriarchy feeling threatened by AI.
Patricia Gestoso
The hype around idyllic tech workplaces that originated in Silicon Valley with tales of great pay, free food, and ping-pong tables reaches a whole new level when we talk about artificial intelligence (AI). Tech companies that want to remain competitive court data scientists and expert AI developers with six-figure salaries and perks that go from unlimited holidays, on-site gyms, and nap pods to subsidising egg-freezing and IVF treatments. I am a director at a software company that develops AI applications, so I have seen it firsthand.
But I also spent 12 years in Venezuela so I am aware that AI workers there have very different stories to tell than their counterparts in the global North. And this North-South disparity in working conditions is repeated across the world and amplified to the point where in the South a large portion of them are gig workers on subsistence rates.
Image annotators
Take, for instance, the self-driving car industry. It seeks to substitute people at the wheel with algorithms that mimic human pattern recognition – yet it relies on intensive human labour.
Self-driving car algorithms need millions of high-quality images labelled by annotators – workers who assess and identify all the elements in each image. And the industry wants these annotated images at the lowest possible cost. Enter: annotators in the Global South.
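To give a sense of what that labour looks like, here is a toy example of my own of the kind of record an annotator produces for a single street image. The field names and values are invented for illustration and don’t correspond to any specific platform’s format.

```python
# Invented example of one image-annotation record (not a real vendor schema).
annotation = {
    "image_id": "street_000123.jpg",
    "annotator_id": "worker_8841",
    "labels": [
        {"category": "pedestrian",    "bbox": [412, 310, 58, 140]},   # x, y, width, height in pixels
        {"category": "traffic_light", "bbox": [130, 85, 22, 60], "state": "red"},
        {"category": "car",           "bbox": [640, 360, 210, 130]},
        {"category": "lane_marking",  "polyline": [[0, 700], [400, 520], [760, 430]]},
    ],
    "time_spent_seconds": 95,
}
```

Every frame can contain dozens of such objects, a model needs millions of frames, and each record is produced by a person paid by the task.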
Annotators in Venezuela are paid an average of 90 cents an hour with some being paid as low as 11 cents/hour. The situation is similar for their counterparts in North Africa.
And annotators are not the only tech workers in the Global South making it possible for the Global North to reap the benefits of AI.
Social media moderators
The impact of fake news on elections and conflicts has put pressure on tech big bosses to moderate social media content better. Their customary response has been to offer reassurances that they are working on improving the AI tools that parse content on their platforms.
We frequently hear that AI algorithms can be deployed to remove the stream of depictions of violence and other disturbing content on the internet and social media. But algorithms can only do so much – platforms need human moderators to review content flagged by AI tools. So where do those people live and how much are they paid?
Kenya is the headquarters of Facebook’s content moderation operation for sub-Saharan Africa. Its workers are paid as little as $1.50 an hour for watching deeply disturbing content, back-to-back, without the benefit of any “wellness” breaks or the right to unionise. Moreover, they have a 50-second target to make a decision on whether content should be taken down or not. Consistently taking longer to make the call leads to a dismissal.
Still, moderation is not applied equally around the world. As the Mozilla Internet Health Report 2022 says: “although 90% of Facebook’s users live outside the US, only 13% of moderation hours were allocated to labelling and deleting misinformation in other countries in 2020.” And 11 of the 12 countries with the largest national Facebook audiences are part of the Global South. This is in line with prioritising user engagement over user safety.
Mining disasters
While AI is naturally associated with the virtual world, it is rooted in material objects: datacentres, servers, smartphones, and laptops. And these objects are dependent on materials that need to be taken from the earth with attendant risks to workers’ health, local communities, and the planet.
Unfortunately, the Global North’s apathy towards tackling child labour in the cobalt supply chain means that electronics and car companies get away with maximising profit at the expense of human rights and of harm to miners.
As well as taking advantage of lax protection of human rights and health to pick up cheap labour, tech companies look to the poor data privacy laws in the Global South to enable them to trial their AI products on people there.
Invasive AI applications are tested in Africa, taking advantage of the need for cash across the continent coupled with lax data privacy restrictions. Examples include apps specialised in money lending – so-called lendtechs. They use questionable methods, such as collecting micro-behavioural data points, to determine the creditworthiness of users in the region.
Lack of regulation enables lenders to exploit the borrowers’ contacts on their phones to call their family and friends to prompt loan repayment.
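To give a sense of what “micro-behavioural data points” can mean in practice, here is a toy sketch of the kind of phone signals such an app might harvest and fold into a score. Every feature name, weight, and threshold below is a hypothetical illustration, not a description of any real lender’s model.

```python
# Illustrative sketch of behavioural signals a lendtech app might collect from
# a borrower's phone. All names and weights are hypothetical.
borrower_signals = {
    "sms_count_last_30_days": 842,        # read from the messaging inbox
    "contacts_on_phone": 317,             # later usable to chase repayment
    "app_install_count": 64,
    "gps_home_work_distance_km": 12.4,
    "typing_speed_chars_per_min": 180,
}

def toy_credit_score(signals: dict) -> float:
    """A toy scoring rule: more digital activity counts as 'more creditworthy'.
    Real models are opaque; the privacy cost of the inputs is the point."""
    score = 300.0
    score += min(signals["contacts_on_phone"], 500) * 0.3
    score += min(signals["app_install_count"], 100) * 1.0
    score += min(signals["sms_count_last_30_days"], 1000) * 0.1
    return min(score, 850.0)

print(toy_credit_score(borrower_signals))  # ~543 on a 300-850 style scale
```

The point of the sketch is not the arithmetic but the inputs: each line of that dictionary is personal data that, in better-regulated markets, a lender would struggle to justify collecting at all.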
The human rights project NotMy.ai has mapped 20 AI schemes led by Latin American governments that were seen as likely to stigmatise and criminalise the most vulnerable people. Some of these applications – like predictive policing – have already been banned in some regions of the US and Europe. Numerous such initiatives are linked to Global North software companies.
Among the projects, two are especially creepy. First, the rollout of a tech application across Argentina, Brazil, Colombia, and Chile that promises to forecast the likelihood of teenage pregnancy based on data such as age, ethnicity, country of origin, disability, and whether the subject’s home had hot water in the bathroom. Second, a Minority Report-inspired model deployed in Chile to predict a person’s lifetime likelihood of a criminal career from age, gender, registered weapons, and family members with a criminal record – a model that reports a 37% false-positive rate.
The future is already there
We in the Global North might naturally consider the Global South to have only a marginal involvement in the use and development of AI. The reality is that the exploitation of the Global South is crucial for the Global North to harness the benefits of AI and even manufacture AI hardware.
The South provides cheap labour, natural resources, and poorly-regulated access to populations on whom tech firms can test new algorithms and resell failed applications.
The North-South chasm in digital economies was summed up elegantly by novelist William Gibson, who imagined cyberspace in his 1984 novel Neuromancer, in a line quoted by The Economist in 2003. “The future is already here,” he declared, adding, “it’s just not evenly distributed.”
In truth, the exploitation and harm that go with the development of AI demonstrate that it is not only the future that is with us ahead of time, but also the inhumanity of the colonial past.
Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it would take you years to ramp up your AI literacy?
Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
Are you concerned because your clients are prioritising AI but you keep procrastinating on learning about it because you think you’re not “smart enough”?