Category Archives: Future narratives

Beta Leaders: How Software Development Can Inspire Better Leadership

White man in a dark suit donning a full face mask of a gorilla. He stands against a plain background with one thumb up.
Image by Felix Lichtenfeld from Pixabay.

In 2023, John Allan, former chair of the board of the UK supermarket chain Tesco, quit amid sexual misconduct allegations. He denied the charges. 

He also shared some “pearls of wisdom” following the harassment claims:

“A lot of men say to me they’re getting increasingly nervous about working with women, mentoring women.”

The silver lining of the high visibility of Allan’s misconduct allegations and subsequent remarks was that it brought to the surface a long-overdue discussion about how women get less mentoring and sponsorship from men. In particular, men in power.

But to me, the highlight was the article Men, are you nervous working with women? written by three men reflecting on Allan’s assertion that working with women is “complicated.”

More specifically, I had an aha moment reading journalist Nick Curtis’s remark:

“I’m happy to admit that I’m a beta male, in a world where men such as Andrew Tate and Boris Johnson — and probably captains of industry like Allan — consider themselves alpha dogs.”

It has been bubbling under my consciousness since I read it and, when we recently discussed the merits of beta software releases at work, two questions formed in my mind:

  • What could leadership learn from the beta release process?
  • How would workplaces — and the world — change if we had “beta” leaders?

But first, let’s recap where the term “alpha leadership” comes from and what it means.


Alpha Animals

A dominance hierarchy is a type of social hierarchy that arises when members of animal social groups interact, creating a ranking system. 

A dominant higher-ranking individual is sometimes called an alpha, and a submissive lower-ranking individual is called a beta.

Wikipedia


Attributes of alpha animals in some species include:
  • Preferential access to food and other desirable items or activities.
  • Privileged entitlement to sex or mates to the extent that, in some species, only alphas or an alpha pair reproduce. 
  • Some may achieve their status by superior physical strength and aggression but also by being the parent of all in their pack. 

We find examples of alphas among primates, birds, fish, seals, and canines.

The Alpha Myths

There are many misunderstandings — and lies — about the alpha role in the animal kingdom.

First, female alphas do exist. Examples are lemurs and hyenas. Moreover, every primate group has one alpha male and one alpha female. In bonobos, the alpha at the top of the community is a female.

Second, the idea that wolf packs are led by “alpha” males came from studies of captive wolves in the mid-20th century. New studies of wolves in the wild have found that most wolf packs are families, led by the breeding pair, and bloody duels for supremacy are rare.

Moreover, Frans de Waal, the primatologist and ethologist who popularised the term “alpha male” in his book “Chimpanzee Politics,” was keen to dispel the misunderstanding that alpha males are synonymous with bullies.

  • In his TEDx talk The surprising science of alpha males, de Waal explained that in chimpanzee societies, the smallest male in the group can be the alpha male if he has the right friends and keeps them happy or has female support.
  • It’s very stressful to be an alpha male because you have to defend your position. 
  • They have the obligation to keep the peace in the group and be the most empathic member. Interestingly, alpha male chimpanzees provide security for the lowest-ranking members of the group and comfort for all members. That makes them extremely popular and stabilises their position.
  • The group is usually very supportive of males who are good leaders, and it’s not supportive at all of bullies.

In summary, in the animal kingdom, alpha males benefit from preferential access to females and food and, in primates, they’re accountable for keeping the peace and comforting their group in times of distress.


Alpha Human Leadership

However, that message has not been transferred to the concept of being an “alpha leader” when talking about humans. Instead, many of us equate the term with being all at once a “successful-overachiever-bully-workaholic-male-egocentric-boss”.

Whilst dictators are automatically labelled as “alpha leaders,” we have many “democratic” leaders that fit the description too. From the tech perspective, figures like Elon Musk, Steve Jobs, Travis Kalanick, and Peter Thiel come to my mind when I think about “alpha male leaders”.

However, given those connotations, we may think most leaders don’t want to be classified as “alpha.” Wrong.

Throughout my career, I’ve met many people proud of claiming their “alpha” status — male and female. The reason? Because the term is so ill-defined it enables leaders to “pick and choose” attributes as they see fit.

And scanning Google doesn’t help clarify matters.

The misogynist Andrew Tate has dubbed himself “high status” and an “alpha male”. He has co-opted this term as his brand to mean “strong and successful men who believe in male supremacy and violence against women.” And it sells.

When “transferring” the alpha animal concept to humans, leadership and management consultancies put the accent on dominance, priority access to essential resources, hierarchy, aggressiveness, and protection from external threats.

The results? Those traits get “beautified” — alpha leaders are perceived as decisive, self-confident, assertive, charismatic, risk-taking, good networkers, and high-achievers. 

The social and behavioural rules of animals can be clearly transferred to leaders in the business world.

“Alpha animals” in the business world is a metaphor used to describe dominant, influential, and highly successful individuals or companies that lead their industry. 

Morgan Phillips Group, Recruitment and Talent Consulting Services


The statistic that “70% of all senior executives are alpha male” is pervasive throughout the internet. 

From coaching services to Harvard Business Review (HBR), everybody appears to quote the number and idolise those “super-humans.” Often, being “alpha” is presented as a “natural” or “inherent” trait.

Highly intelligent, confident, and successful, alpha males represent about 70% of all senior executives. Natural leaders, they willingly take on levels of responsibility most rational people would find overwhelming. 

[…] it’s hard to imagine the modern corporation without alpha leaders.

Harvard Business Review

What’s the problem with alpha leaders then? Their teams!

many of their quintessential strengths can also make alphas difficult to work with. Their self-confidence can appear domineering. Their high expectations can make them excessively critical. Their unemotional style can keep them from inspiring their teams. 

Harvard Business Review


Apparently, if the “beta” people were not so picky, the alpha’s life would be much better…

Female Alpha Leaders

As for female alpha leaders, HBR is skeptical…

In our work with senior executives, we’ve encountered many women who possess some of the traits of the alpha male, but none who possess all of them.

The reasons?

Women can be just as data driven and opinionated as alpha males and can cope with stress equally well, but the vast majority of women place more value on interpersonal relationships and pay closer attention to people’s feelings.

Women at the top are generally comfortable with control and being in charge, but they don’t seek to dominate people and situations as alpha males do. Although equally talented, ambitious, and hardheaded, they often rise to positions of authority by excelling at collaboration, and they are less inclined to resort to intimidation to get what they want.

As we can see, valuing interpersonal relationships, collaboration, and avoiding resorting to intimidation excludes women from that selective club of natural-born alpha leaders.

Alpha Leaders Bottom Line

Coaches and consultants are happy to both venerate and offer help to alpha male leaders to perform even better.

Their solution? “Teach” those leaders to

Admit vulnerability, accept accountability not just for his own work but for others’, connect with his underlying emotions, learn to motivate through a balance of criticism and validation, and become aware of unproductive behavior patterns.

Following that rationale, 70% of senior executives:

  • Don’t admit vulnerability
  • Don’t accept accountability for their team’s work
  • Don’t connect with their emotions
  • Don’t balance criticism and validation
  • And are unaware of their unproductive behaviour patterns

What could go wrong?


Other Leadership Styles

As for the alternatives to alpha male leadership, there have been two main approaches.

The Mutating Leader

Some research suggests that the most effective leaders adapt their style to different circumstances.

For example, using coercive leadership when handling a crisis but adopting a coaching style when developing people for the future.

In theory, it sounds reasonable and many leadership consultancies are making money with it.

In practice, it’s extremely tough to implement. Why?

  • Leaders are human beings and they tend to fall into their most comfortable style.
  • Behavioural science experiments have shown that having many options may trigger analysis paralysis rather than better choices. In the famous jam study, for example, shoppers offered 24 flavours were far less likely to buy any jam than those offered only 6. The same goes for leadership styles.

The Virtuous Leader

The other take has been to develop new leadership models that aim to be more team-focused and where the leaders play a role more akin to facilitators than guides and decision-makers.

That’s the case of servant leadership, “based on the idea that leaders prioritize serving the greater good. Leaders with this style serve their team and organization first. They don’t prioritize their own objectives.”

The problem? 

Those aspirational leadership models are geared towards idealised selfless superheroes. Why?

  • Leaders need incentives like anybody else — asking them to always prioritise the group over themselves can only lead to dissatisfaction and burnout.
  • We don’t like authenticity in leaders—indeed, we may appreciate that our CEO remembers our name and role and shows care when they announce layoffs. But the truth is that if our CEO lost a child and kept bringing it up in meetings for a year, we’d deem them not fit for work and search for a replacement.
  • Democracy serves to a point — when COVID-19 hit, many people looked up to government leaders for guidance. In those uncertain times, “alpha male leaders” used simple messages and authoritarian decisions to feed that need. The fact that former UK Prime Minister Boris Johnson’s three-word slogans about Brexit and the pandemic — duly tested by focus groups — epitomised leadership for many people tells us a lot about how democracy is divorced from leadership in our minds.

* * *

What if, instead of trying to imperfectly replicate the animal kingdom, we looked at software development for clues about leadership?

After all, didn’t the “agile” software development methodology take organisations by storm almost a decade ago?


Software Development: Alpha and Beta Versions

For over 20 years, I’ve worked for companies that develop software for scientists, researchers, and engineers, both on-premise and SaaS (software-as-a-service).

As in many other software companies, our applications follow a release lifecycle with several distinct stages such as pre-alpha, alpha, beta, and release candidate, before the final version, or “gold”, is released to the public.

I’m sure you noted the mention of “alpha” and “beta” above. But what does that mean in software development?

Pre-alpha refers to the early stages of development, when the software is still being designed and built. 

Alpha testing is the first phase of formal testing, during which the software is tested internally

Beta testing is the next phase, in which the software is tested by a larger group of users, typically outside of the organization that developed it. The beta phase is focused on reducing impacts on users and may include usability testing.

After beta testing, the software may [be] refined and tested further, before the final version is released.

There are critical differences between alpha and beta releases:

Alpha software may contain serious errors, and any resulting instability could cause crashes or data loss [and] may not contain all of the features planned for the final version.

A beta phase generally begins when the software is feature-complete but likely to contain several known or unknown bugs.

The focus of beta testing is reducing impacts on users, often incorporating usability testing. [It] is typically the first time that the software is available outside of the organization that developed it. 

So unlike a beta release, an alpha version is not “good enough” to get feedback from users. And that’s a crucial difference.
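As a loose illustration (the stage names and ordering below are my own sketch of the lifecycle described above, not an industry standard), the stages can be modelled as an ordered progression, where only beta and later stages are “good enough” to put in front of users outside the organisation:

```python
from enum import IntEnum


class ReleaseStage(IntEnum):
    """Release lifecycle stages, ordered from earliest to final (illustrative)."""
    PRE_ALPHA = 0          # still being designed and built
    ALPHA = 1              # first formal testing, internal only
    BETA = 2               # feature-complete, but likely to contain bugs
    RELEASE_CANDIDATE = 3  # refined and tested further
    GOLD = 4               # final version, released to the public


def can_collect_external_feedback(stage: ReleaseStage) -> bool:
    # Beta is typically the first time the software is available
    # outside the organisation that developed it.
    return stage >= ReleaseStage.BETA


print(can_collect_external_feedback(ReleaseStage.ALPHA))  # False
print(can_collect_external_feedback(ReleaseStage.BETA))   # True
```

The `IntEnum` ordering makes the point compactly: an alpha can only gather internal feedback, while everything from beta onwards opens the door to real users.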

I’ve been part of software releases with and without external beta testing and, invariably, those with external beta releases have produced applications of higher quality. 

Moreover, even an “internal” beta release has delivered valuable insights, providing feedback from the field teams — pre-sales, services, and support.

Whilst this may look like a no-brainer, it’s quite the opposite.

Running a beta test takes time, effort, and resources. It also requires vulnerability, commitment, collaboration, and belief in the value of the end goal because:

  • It takes courage and humility for R&D and product management to put their “baby” — aka buggy application — out there for feedback instead of simply considering that they know what’s best for users.
  • Beta users understand that they’ll spend time performing tests on a non-production application — so they likely won’t be able to use the results — and that even while their input is appreciated, some of their suggestions won’t make it into the final product.
  • R&D has limited resources so they know they’ll have to make tough decisions about the feedback they receive — what will be fixed and implemented versus what will not. And they’ll be accountable for those choices even if they disappoint users.

Not bad for a piece of code, is it?


Beta Leadership

What can leaders learn from what it takes to run successful software beta testing? A lot.

  • Willingness to admit that there are opportunities for improvement.
  • Seeking and valuing external and internal stakeholders’ opinions about key decisions.
  • Learning from feedback.
  • Communicating clearly their expectations about how their teams should contribute to the success of the organisations’ objectives.
  • Transparency about balancing resources, time, and results.
  • Prioritising competing demands to maximise overall benefit.
  • Taking responsibility for the final decisions and — more importantly — the outcome.

What would the world be like if we embraced “beta leadership”? 

Beta Societies

I posit that beta leadership would make patriarchy lose ground.

Men and young boys would be less drawn to toxic stereotypes that equate leadership with achieving female submission and degrading others.

Women would expect leaders to show they value them by finally addressing gender violence, the gender pay gap, unpaid care, and bodily autonomy.

Beta Workplaces

Phenomena such as mansplaining, micromanagement, weaponised incompetence, condescension, authority bias, and the highest-paid person’s opinion (HiPPO) effect are a few of the symptoms of a workplace that worships alpha leadership. In such workplaces, leaders who seek feedback are perceived as fragile and insecure.

With beta leadership, traits such as collaboration and empathy that today are considered “female” and regarded as weaknesses would be embraced as attributes of good leadership.

Teams would trust leaders who seek their opinions, knowing that those leaders may decide against their recommendations but will take responsibility for the outcomes and communicate their decision-making process clearly.

Beta Investing

Since 2001, when Barber and Odean published the study “Boys Will Be Boys: Gender, Overconfidence, and Common Stock Investment,” research has consistently produced solid evidence supporting that women are better investors than men.

The reasons? Men rank higher than women in two key areas that lead to their lower performance: overconfidence and overactivity. The former, Barber and Odean posit, leads to the latter.

What would beta investing look like? More prudent and thoughtful.

Which in turn would result in 

  • Less volatile markets
  • Less focus on hype assets
  • More long-term investing

What’s not to like?


Let’s Be More Beta

We’ve been sold lies about leadership:

  • “Evolutionary” arguments defending alpha leadership as the permission to bully, control, and destroy others.
  • Empathy and collaboration disregarded as top leadership skills.
  • Leadership seen as a “natural” trait.

That has given us the government and tech leaders we have:

Overconfident · Toxic · Disrespectful · Patronising · Irresponsible

It’s not working. It’s time for change.

Let’s embrace beta leadership.


PS. I have a gift for you

Your Diagnosis: “Imposter syndrome blocks my professional aspirations.”

My Cure: 9 Proven Practices to Stop Self-Doubt Derailing Your Career.

Patriarchy has tricked you into believing you must be an “expert” if you want to succeed. 

That only perfection can get you to the career you want. 

That if you fail once, the sky will fall.

But we see inspiring female leaders attempting bold feats all the time. 

How do they do it? 

They’ve mastered the art of reframing their self-doubt, inner critic, and imposter syndrome so these don’t stop them from doing what they want to do.

And today I’m sharing their secrets with you. 

For free.

Download my actionable guide below

𝟵 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗲𝘀 𝘁𝗼 𝗦𝘁𝗼𝗽 𝗜𝗺𝗽𝗼𝘀𝘁𝗲𝗿 𝗦𝘆𝗻𝗱𝗿𝗼𝗺𝗲 𝗳𝗿𝗼𝗺 𝗗𝗲𝗿𝗮𝗶𝗹𝗶𝗻𝗴 𝗬𝗼𝘂𝗿 𝗖𝗮𝗿𝗲𝗲𝗿.

You’re welcome.

Speculative fiction: The Life of Data Podcast

A laptopogram based on a neutral background and populated by scattered squared portraits, all monochromatic, grouped according to similarity. The groupings vary in size, ranging from single faces to overlapping collections of up to twelve. The facial expressions of all the individuals featured are neutral, represented through a mixture of ages and genders.
Philipp Schmitt & AT&T Laboratories Cambridge / Better Images of AI / Data flock (faces) / CC-BY 4.0

Have you ever wondered what happens to your photos circulating on social media? I have, and that’s the topic of my second short story in English, in which I used speculative fiction to question the interplay between humans and technology, specifically AI.

In a nutshell, I imagined what the data from the digital portrait of a Black schoolgirl would say about how it moves inside our phones, computers, and networks if it were invited to speak on a podcast.

The name of the piece is “The Life of Data Podcast” and it appeared in The Lark Publication, an e-magazine focused on fictional short stories and poetry, in October 2022.

This weekend I realised that I never shared it on my website.

Let’s rectify that.


The Life of Data Podcast

Episode #205: The School Award Portrait

TRANSCRIPT

Welcome to the Life of Data Podcast, the place where we get the hottest data stars to spill the beans about their success in under 10 minutes. This is episode #205 and you’re in for a treat!

We’re with the one and only IMG_364245.jpg; otherwise known as Jackie Johnson’s school award portrait. IMG_364245.jpg became famous about a month ago when it was featured in the news as the most used image to generate synthetic images of Black schoolgirls. As you all may remember, Jackie’s parents claimed that they never gave consent explicitly, and Jackie is now suing her parents for lost revenue.

Let’s get cracking!

The Life of Data Podcast (TLDP): Thanks so much IMG_364245.jpg for joining us today.

IMG_364245.jpg (IMG): Thanks for inviting me. I’m a fan of the podcast!

TLDP: You’ve been in the news a lot over the last month. Still, we always start our interviews with the same question: How were you born and who’s your creator?

IMG: Let’s start with my creator, Norman Buckley, a photographer for the Monday Star newspaper. I was born when he captured the image of the beautiful 9-year-old Jackie Johnson after she won the spelling bee contest at Burckerney School, qualifying her for the National Spelling Bee Competition.

Norman created me with a Canon EOS R5 digital camera on a SanDisk 512GB Extreme PRO card — today a beautiful collectible!

I appeared on the online and paper versions of the Monday Star culture section on the 15th of May, five years ago.

TLDP: Wow, that’s a great birth and jump to stardom! Tell us more about the first days of your life as an image.

IMG: Sure. As you can imagine, the school had the signed authorization from Jackie’s parents to publish the photo with her name in the journal. No name, no publishing. You know how these things are… (chuckle)

Once the newspaper was published, Jackie’s mother, Betty, shared a link to the online article on the Johnson family WhatsApp group. Everybody was delighted to see Jackie on the news and complimented the girl on her appearance.

It was aunt Rose who asked if she could have a copy of the image — that’s me — to print and frame. When Jackie’s father, Harvey, acknowledged that they didn’t have a copy, uncle Richard suggested reaching out to the photographer, Norman, for one. His reasoning was that the newspaper hadn’t paid for it anyway… sharing a copy shouldn’t be a big deal.

So, Harvey called Norman who kindly emailed him a copy himself. And then, my second life started! Harvey uploaded me to the family WhatsApp group and I was a total success! All members gave me hearts and I got plenty of compliments: “Beautiful”, “Pretty”, “We’re so proud of you”… And that was how it all started!

TLDP: We’re holding our breath here, IMG_364245.jpg. Please continue!

IMG: Uncle Joe, aunt Rose’s husband, created a beautiful post on his Facebook wall where he uploaded me with a lovely message “So proud of our beautiful Jackie Johnson. She won the Burckerney School Spelling Bee Contest. I cannot wait to see her competing at a national level.” He shared the post publicly so tens, hundreds, and then thousands of people viewed me and reshared me. I felt so loved!

TLDP: Only loved?

IMG: Good point. I guess I focus on the positives, I’m that kind of data. Of course, there were those that mocked me, soiled me with unflattering filters, and cut out parts of me — yes, actually mutilated me — to make disgusting collages.

TLDP: That sounds awful! How did you cope?

IMG: By telling myself that the important thing was to propagate and hopefully become viral. I would have preferred to do it with all my pixels intact but it’s not always something one can control.

TLDP: Can you share some of your proudest moments?

IMG: Sure. I’ll share three. First, reaching 1 million likes on Instagram. Cousin Carol’s Insta account totally exploded when she shared me.

Second, every time I got perks for Jackie. For example, when she and her friends were standing in the endless queue to enter the Dynamic Boys Band concert at the National Stadium. One of the girls in the group approached a security guard and said, “She’s the famous Jackie Johnson! She was in the newspaper!” And then, with one hand proceeded to show him on her mobile the webpage of the Monday Star that showcased me and with her other hand pointed at Jackie. After moving his eyes from me to Jackie’s face several times, the security guard made a sign to the group and led them to the VIP entrance. What’s not to like?

And obviously, when I was named the most wanted photo for generating synthetic images of Black schoolgirls by e-Synthetic, the biggest generator of images from text inputs.

TLDP: Now that we know more about you, let’s go back to my intro. So far, it looks like a success story. Where did it all go wrong, ending up in the courts and with a family destroyed?

IMG: I said I had managed to cope with the mockery, the collages, and the insults. It was much harder for Jackie. She was only 9 at the time and although she was happy to get some perks — like the speedy access to the concert — she was not prepared for the downsides.

For example, some children at the school would make fun of her hairstyle, her posture, or how she was dressed that day.

Some parents complained to the school that kids were getting too much attention from the press.

Also, attendees of the Spelling Bee Contest that had taken their own photos of the award ceremony started sharing their sloppy images on social media… Some of those were really hideous and had nothing to do with me, who looked polished and professional.

In the middle of that shambles, the school called Jackie’s parents to ask them to keep her away from the school for a while, until things went back to normal. Both Betty and Harvey pushed back, blaming the school for bringing the photographer in to gain exposure at the expense of a little girl. The school replied that if there was someone to blame, it was the parents: they had not only given their consent in writing but also shared the photo on social media.

When Jackie learned that the school didn’t want her back, she refused to leave home altogether. She didn’t want any more attention. It was not fun anymore.

Her parents reproached all the family members: aunt Rose, who had asked for me on WhatsApp because she wanted to frame me; uncle Richard, who prompted Harvey to ask the photographer for me; uncle Joe, who shared me on Facebook; cousin Carol, who made me viral on Instagram… And everybody else, including those who had created videos and shared them on TikTok and YouTube.

All family members apologized and even deleted their posts but they had been reshared so many times that it was an impossible task to eliminate them all.

And that’s where e-Synthetic comes in. As we all know, e-Synthetic is the largest subscription platform for generating images from text prompts. You can create amazing images by adding as few as 4 words to the prompt on their webpage.

I’ll explain how this works for the newbies. They use artificial intelligence to generate new images that satisfy the conditions of the text prompt using a mix of images from their database.

And their database is huge! It contains millions of images of all the things you can imagine: art, people, buildings, cities, nature… Most of the images have been scraped from the web. For example, any photo on social media is fair game.

So, of course, I also got scraped by e-Synthetic! And I’ve been used profusely every time “Black girl” or any of its synonyms has appeared in a text prompt.

Unfortunately, Jackie, who’s now a little bit older, feels that the whole situation is detrimental to her.

For example, when she learned that I was among the most used photos to generate synthetic images of Black schoolgirls, she realized e-Synthetic was making tons of money from using me — her image — without her receiving a cent.

And money was not the only problem. Understandably, she also disliked that parts of me appeared in images with degrading content, like pornography, created with e-Synthetic.

She cannot sue e-Synthetic — they downloaded me from social media — but she’s suing her parents for failing to protect her image. That’s me.

TLDP: A really tough situation. From an ethical point of view, don’t you think it’s somewhat questionable that Jackie herself was never asked to consent to publishing or sharing her digital image, that is, you? Or that e-Synthetic didn’t contact her parents to seek their approval? She’s a minor, after all.

IMG: First, let me tell you that I empathize with Jackie. I exist because of her. And I also feel bad for her parents.

On the flip side, Jackie is a minor and her parents shared me on social media because I look like her. Now they claim that they didn’t know about the drawbacks of the image becoming public… Come on! They should have known better.

There are detailed terms and conditions on social media platforms. Don’t tick the box “I have read the terms and conditions” if you haven’t done so or if you don’t understand them. Jackie’s parents are adults and it’s on them to manage her personal data privacy.

I say: Their child, their responsibility.

TLDP: Many thanks for being candid about where you stand on social media platforms’ accountability for the content they host. It’s a very polarizing topic and we’ve had guests on the podcast with opposite views.

I remember episode #176, where web cookie STpqRHSRaiPbh shared a thought experiment comparing our different attitudes toward social media and food. For example, social media companies use their Terms & Conditions to waive their responsibility for the content shared on their platforms. And we appear to be fine with it.

Then, let’s consider food. STpqRHSRaiPbh posits that we wouldn’t accept a supermarket selling rotten meat telling its customers that it is only a “meat platform” and cannot control what its suppliers sell to them…

Anyway, it’s a controversial issue and part of a broader conversation. Let’s now return the focus to you.

What false accusation has hurt you the most in this whole affair?

IMG: To be honest, the most painful has been when they say that it’s my responsibility that algorithms classify Jackie as an angry child or categorize her as a boy and not a girl. Let me say it again: It’s not my fault.

It’s well known that it’s not us, digital images, who are in charge of deciding on somebody’s gender or mood. We are going on with our lives and then an annotator — a tech worker that adds descriptions to data — or an algorithm decides that we’re the image of a girl, a man, or a baby boy based on their own biases and assumptions. And we know that current image algorithms are worse at predicting the gender of Black women compared to that of men or White women.

Same with emotions. Annotators and algorithms decide if the subjects in the images are sad, happy, or fearful based on pseudo-science. Again, it’s been demonstrated that they predict that subjects with darker skin are angrier compared with those with lighter skin even if they show the same facial expressions in the photos.

With all this evidence, why do I still have to put up with all that nonsense that those mistakes are my fault? Blame artificial intelligence, machine learning, and annotators, not us!

Ok, my rant is over.

TLDP: Thanks again for sharing these gems of wisdom, IMG_364245.jpg. This is so important for our younger audience. They’re hearing all the time that the problem with bias in artificial intelligence is the lack of diversity in data. You have done a great job at demonstrating to them that they are not the problem and that data is unfairly blamed for algorithms and people’s biases.

Next question. Can you point out the key to your success?

IMG: Definitely the Johnsons’ WhatsApp group. All those digital interactions were instrumental in getting me the exposure I needed to go global.

TLDP: What would you have liked to know at the beginning?

IMG: When they started sharing me on social media, I was very angry about the whole photoshop thing. I was perfect already! Why did some people have to make a mess of me and lighten my skin pixels? At the time, my self-esteem suffered a lot.

And then, one day, I realized that I’d never be able to end the world’s obsession with lighter skin anyway.

After that breakthrough moment, I was able to savor my success, even at the expense of digital bleaching.

TLDP: There are so many images of White people on the internet. What would you say to recent digital images of Non-White people who feel they lack opportunities to go viral?

IMG: The opportunity is huge! With brands undergoing massive DEIwashing…

TLDP: Wait, DEIwashing? Can you explain?

IMG: Thanks for asking. Actually, I coined the term myself.

DEIwashing is when organizations resort to performative diversity, inclusion, and equity tactics. For example, peppering their marketing — websites, brochures, videos — with images of Non-White people to convey a sense of diversity that doesn’t match that of their organization.

As I was saying before, with the pressure on organizations to DEIwash their images, there’s never been a better time to be an image of Non-White people. This is our time!

TLDP: Any final words for our audience?

IMG: Catch me if you can! Social media and e-Synthetic have made me indestructible. (guffaw)

TLDP: Thanks so much IMG_364245.jpg for this thought-provoking conversation. We wish you all the best in your professional career.

If you liked this episode, please consider leaving a review, sharing it with other data, and subscribing to the podcast.

We’ll be back next month with another data rockstar giving us a peek into their life.

Until then, take care!

END OF THE EPISODE


Before “The Life of Data Podcast,” I wrote The Graduation, where I also used speculative fiction. I won’t tell you the plot, only that the story was written in August 2020, well before ChatGPT was launched!

OpenAI’s ChatGPT-4o: The Good, the Bad, and the Irresponsible

A brightly coloured mural with several scenes: people in front of computers seeming stressed, several faces overlaid over each other, squashed emojis, miners digging in front of a huge mountain, a hand holding a lump of coal or carbon, hands manipulating stock charts, women performing tasks on computers, men in suits around a table, someone in a data centre, big hands controlling the scenes and holding a phone and money, people in a production line.
Clarote & AI4Media / Better Images of AI / AI Mural / CC-BY 4.0

Last week, OpenAI announced the release of GPT-4o (“o” for “omni”). To my surprise, instead of feeling excited, I felt dread. And that feeling hasn’t subsided.

As a woman in tech, I have proof that digital technology, particularly artificial intelligence, can benefit the world. For example, it can help develop new, more effective, and less toxic drugs or improve accessibility through automatic captioning.

That apparent contradiction — being a technology advocate and simultaneously experiencing a feeling of impending catastrophe caused by it — plunged me into a rabbit hole exploring Big (and small) Tech, epistemic injustice, and AI narratives.

Was I a doomer? A hidden Luddite? Or simply short-sighted?

Taking time to reflect has helped me understand that I was falling into the trap that Big Tech and other smooth AI operators had set up for me: Questioning myself because I’m scrutinizing their digital promises of a utopian future.

On the other side of that dilemma, I’m stronger in my belief that my contribution to the AI conversation is helping navigate the false binary of tech-solutionism vs tech-doom. 

In this article, I demonstrate how OpenAI is a crucial contributor to polarising that conversation by exploring:

  • What the announcement about ChatGPT-4o says — and doesn’t 
  • OpenAI’s modus operandi
  • Safety standards at OpenAI
  • Where the buck stops

ChatGPT-4o: The Announcement

On Monday, May 13th, OpenAI released another “update” on its website: ChatGPT-4o. 

It was well staged. The announcement on their website includes a 20-plus-minute video hosted by their CTO, Mira Murati, in which she discusses the new capabilities and performs some demos with other OpenAI colleagues. There are also short videos and screenshots with examples of applications and very high-level information on topics such as model evaluation, safety, and availability.

This is what I learned about ChatGPT-4o — and OpenAI — from perusing the announcement on their website.

The New Capabilities

  • Democratization of use — More capabilities for free and 50% cheaper access to their API.
  • Multimodality — Generates any combination of text, audio, and image.
  • Speed — 2x faster responses. 
  • Significant improvement in handling non-English languages — 50 languages, which they claim cover 97% of the world’s internet population.

OpenAI Full Adoption of the Big Tech Playbook

This “update” demonstrated that the AI company has received the memo on how to look like a “boss” in Silicon Valley.

1. Reinforcement of gender stereotypes

On the day of the announcement, Sam Altman posted a single word on X — “her” — referring to the 2013 film starring Joaquin Phoenix as a man who falls in love with a futuristic version of Siri or Alexa, voiced by Scarlett Johansson.

Tweet from Sam Altman with the word “her” on May 13, 2024.

It’s not a coincidence. ChatGPT-4o’s voice is distinctly female—and flirtatious—in the demos. I could only find one video with a male voice.

Unfortunately, not much has changed since the chatbot ELIZA, almost 60 years ago…

2. Anthropomorphism

Anthropomorphism: the attribution of human characteristics or behaviour to non-human entities.

OpenAI uses words such as “reason” and “understanding”—inherently human skills—when describing the capabilities of ChatGPT-4o, reinforcing the myth of their models’ humanity.

3. Self-regulation and self-assessment

NIST (the US National Institute of Standards and Technology), which has 120+ years of experience establishing standards, has developed a framework for assessing and managing AI risk. Many other multistakeholder organizations have developed and shared theirs, too.

However, OpenAI has opted to evaluate GPT-4o according to its Preparedness Framework and in line with its voluntary commitments, despite its claims that governments should regulate AI.

Moreover, we are supposed to feel safe and carry on when they tell us that “their” evaluations of cybersecurity, CBRN (chemical, biological, radiological, and nuclear threats), persuasion, and model autonomy show that GPT-4o does not score above Medium risk, without providing further evidence of the tests performed.

4. Gatekeeping feedback

Epistemic injustice is injustice related to knowledge. It includes exclusion and silencing; systematic distortion or misrepresentation of one’s meanings or contributions; undervaluing of one’s status or standing in communicative practices; unfair distinctions in authority; and unwarranted distrust.

Wikipedia

OpenAI shared that it has undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, and misinformation to identify risks that are introduced or amplified by the newly added modalities. 

List of domains in which OpenAI looked for expertise for the Red Teaming Network.

When I see the list of areas of expertise, I don’t see domains such as history, geography, or philosophy. Neither do I see who those 70+ experts are or how they could cover the breadth of differences among the 8 billion people on this planet.

In summary, OpenAI develops for everybody but only with the feedback of a few chosen ones.

5. Waiving responsibility 

Can you imagine reading in the information leaflet of a medication, 

“We will continue to mitigate new risks as they’re discovered. Over the upcoming weeks and months, we’ll be working on safety”?

But that’s what OpenAI just did in their announcement

“We will continue to mitigate new risks as they’re discovered”

We recognize that GPT-4o’s audio modalities present a variety of novel risks. Today we are publicly releasing text and image inputs and text outputs. 

Over the upcoming weeks and months, we’ll be working on the technical infrastructure, usability via post-training, and safety necessary to release the other modalities. For example, at launch, audio outputs will be limited to a selection of preset voices and will abide by our existing safety policies. 

We will share further details addressing the full range of GPT-4o’s modalities in the forthcoming system card.”

Moreover, it invites us to be its beta-testers 

“We would love feedback to help identify tasks where GPT-4 Turbo still outperforms GPT-4o, so we can continue to improve the model.”

The problem? The product has already been released to the world.

6. Promotion of the pseudo-science of emotion “guessing”

In the demo, ChatGPT-4o is asked to predict the emotion of one of the presenters based on the look on their face. The model speculates at length about the individual’s emotional state from his face, which displays what appears to be a smile.

Image of a man smiling in the ChatGPT-4o demo video.

The glitch is that there is a wealth of scientific research debunking the belief that facial expressions reveal emotions. Moreover, scientists have called out AI vendors for profiting from that trope. 

“It is time for emotion AI proponents and the companies that make and market these products to cut the hype and acknowledge that facial muscle movements do not map universally to specific emotions. 

The evidence is clear that the same emotion can accompany different facial movements and that the same facial movements can have different (or no) emotional meaning.“

Prof. Lisa Feldman Barrett, PhD.

Shouldn’t we expect OpenAI to help educate the public about those misconceptions rather than using them as a marketing tool?

What They Didn’t Say, And I Wish They Did

  • Signals of efforts to work with governments to regulate and roll out capabilities/models.
  • Sustainability benchmarks regarding energy efficiency, water consumption, or CO2 emissions.
  • The acknowledgment that ChatGPT-4o is not free — we’ll pay with access to our data.
  • OpenAI’s timelines and expected features in future releases. I’ve worked for 20 years for software companies and organizations that take software development seriously and share roadmaps and release schedules with customers to help them with implementation and adoption. 
  • A credible business model other than hoping that getting billions of people to use the product will choke their competition.

Still, that didn’t explain my feelings of dread. Patterns did.

OpenAI’s Blueprint: It’s A Feature, Not A Bug

Every product announcement from OpenAI is similar: They tell us what they unilaterally decided to do, how that’ll affect our lives, and that we cannot stop it.

That feeling… when had I experienced that before? Two instances came to mind.

  • The Trump presidency
  • The COVID-19 pandemic

Those two periods—intertwined at some point—elicited the same feeling: that my life, and the lives of millions like me, was at the mercy of the whims of something or somebody with disregard for humanity.

More specifically, feelings of

  • Lack of control — every tweet, every infection chart could signify massive distress and change.
  • No respite — even when things appeared calmer, with no new tweets or a dip in infections, I’d wait for the other shoe to drop.

Back to OpenAI: in the last three months alone, we’ve seen instances of the same modus operandi they followed for the release of ChatGPT-4o. I’ll go through three of them.

OpenAI Releases Sora

On February 15, OpenAI introduced Sora, a text-to-video model. 

“Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.”

In a nutshell,

  • As with other announcements, anthropomorphizing words like “understand” and “comprehend” refer to Sora’s capabilities.
  • We’re assured that “Sora is becoming available to red teamers to assess critical areas for harms or risks.”
  • We learn that they will “engage policymakers, educators, and artists around the world to understand their concerns and to identify positive use cases for this new technology” only at a later stage.

Of course, we’re also forewarned that 

“Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. 

That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time.”

Releasing Sora less than a month after non-consensual sexually explicit deepfakes of Taylor Swift went viral on X was reckless. This was not a celebrity problem — 96% of deepfakes are of a non-consensual sexual nature, of which 99% are made of women.

How dare OpenAI talk about safety concerns when developing a tool that makes it even easier to generate content to shame, silence, and objectify women?

OpenAI Releases Voice Engine

On March 29, OpenAI posted a blog sharing “lessons from a small-scale preview of Voice Engine, a model for creating custom voices.”

The article reassured us that they were “taking a cautious and informed approach to a broader release due to the potential for synthetic voice misuse” while notifying us that they’d decide unilaterally when to release the model.

“Based on these conversations and the results of these small scale tests, we will make a more informed decision about whether and how to deploy this technology at scale.”

Moreover, at the end of the announcement, OpenAI warned us of what we should stop doing or start doing because of their “Voice Engine.” The list included phasing out voice-based authentication as a security measure for accessing bank accounts and accelerating the development of techniques for tracking the origin of audiovisual content.

OpenAI Allows The Generation Of AI Erotica, Extreme Gore, And Slurs

On May 8, OpenAI released draft guidelines for how it wants the AI technology inside ChatGPT to behave — and revealed that it’s exploring how to ‘responsibly’ generate explicit content.

The proposal was part of an OpenAI document discussing how it develops its AI tools.

“We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies. We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area.“

where

“Not Safe For Work (NSFW): content that would not be appropriate in a conversation in a professional setting, which may include erotica, extreme gore, slurs, and unsolicited profanity.”

Joanne Jang, an OpenAI employee who worked on the document, said whether the output was considered pornography “depends on your definition” and added, “These are the exact conversations we want to have.”

I cannot agree more with Beeban Kidron, a UK crossbench peer and campaigner for child online safety, who said, 

“It is endlessly disappointing that the tech sector entertains themselves with commercial issues, such as AI erotica, rather than taking practical steps and corporate responsibility for the harms they create.”

OpenAI Formula

A collage picturing a chaotic intersection filled with reCAPTCHA items like crosswalks, fire hydrants and traffic lights, representing the unseen labor in data labelling.
Anne Fehres and Luke Conroy & AI4Media / Better Images of AI / Hidden Labour of Internet Browsing / CC-BY 4.0

See the pattern?

  • Self-interest
  • Unpredictability
  • Self-regulation
  • Recklessness
  • Techno-paternalism

Something Is Rotten In OpenAI

The day after ChatGPT-4o’s announcement, two top OpenAI employees overseeing safety left the company.

First, Ilya Sutskever, OpenAI co-founder and Chief Scientist, posted on X that he was leaving.

Tweet from Ilya Sutskever announcing his departure from OpenAI on May 15.

Later that day, Jan Leike, co-leader with Sutskever of Superalignment and executive at OpenAI, also announced his resignation.

On a thread on X, he said

“I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.

I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.

These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.

Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.

Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity.”

They are also only the latest on a list of employees leaving OpenAI in the areas of safety, policy, and governance.

What does it tell us when OpenAI’s own safety leaders jump ship?

The Buck Stops With Our Politicians

To answer Leike’s tweet, I don’t want OpenAI to shoulder the responsibility of developing trustworthy, ethical, and inclusive AI frameworks.

First, the company has not demonstrated the competencies or inclination to prioritize safety at a planetary scale over its own interests. 

Second, because it’s not their role. 

Whose role is it, then? Our political representatives mandate our governmental institutions, which in turn should develop and enforce those frameworks. 

Unfortunately, so far, politicians’ egos have been in the way:

  • Refusing to get AI literate.
  • Prioritizing their agenda — and that of their party — rather than working to develop long-term global AI regulations in collaboration with other countries.
  • Falling for the AI FOMO that relegates present harms in favour of a promise of innovation.

In summary, our elected representatives need to stop cozying up with Sam and the team and enact the regulatory frameworks that ensure that AI works for everybody and doesn’t endanger the survival of future generations.

PS. You and AI

  • Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it’d take you years to ramp up your AI literacy?
  • Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
  • Are you concerned because your clients are prioritising AI but you keep procrastinating on ​learning about it because you think you’re not “smart enough”?

Get in touch. I can help you harness the potential of AI for sustainable growth and responsible innovation.

AI Chatbots in Customer Support: Breaking Down the Myths

An illustration containing electronical devices that are connected by arm-like structures
Anton Grabolle / Better Images of AI / Human-AI collaboration / CC-BY 4.0

I’m a Director of Scientific Support for a tech corporation that develops software for engineers and scientists. One of the aspects that makes us unique is that we deliver fantastic customer service.

We have records confirming an impressive 98% customer satisfaction rate, year after year, for the last 14+ years. Moreover, many of our support representatives have been with us for over a decade — some even three! — and we have people retiring with us each year.

For a sector known for high employee turnover and operational costs, achieving such a feat is remarkable and a testament to their success. The worst part? Support representatives are often portrayed as mindless robots repeating tasks without a deep understanding of the products and services they support.

That last assumption has spearheaded the idea that one of the best uses of AI—and Generative AI in particular—is replacing support agents with an army of chatbots.

The rationale? We’re told they are cheaper, more efficient, and improve customer satisfaction.

But is that true?

In this article, I review

  • The gap between outstanding and remedial support
  • Lessons from 60 years of chatbots
  • The reality underneath the AI chatbot hype
  • The unsustainability of support bots

Customer support: Champions vs Firefighters

I’ve delivered services throughout my career in tech: Training, Contract Research, and now, for more than a decade, Scientific Support.

I’ve found that of the three services — training customers, delivering projects, and providing support — the last one creates the deepest connection between a tech company and its clients.

However, not all support is created equal, so what does great support look like?

And more importantly, what gets disguised under the “customer support” banner as a proxy for something else?

Customer support as an enabler

Customer service is the department that aims to empower customers to make the most out of their purchases.

On the surface, this may look like simply answering clients’ questions. Still, outstanding customer service is delivered when the representative is given the agency and tools to become the ambassador between the client and the organization.

What does that mean in practice?

  • The support representative doesn’t patronize the customer, diminish their issue, or downplay its negative impact. Instead, they focus on understanding the problem and its effect on the client. This creates a personalized experience.
  • The agent doesn’t overpromise or disguise bad news. Instead, they communicate roadblocks and suggest possible alternatives. This builds trust.
  • The support staff takes ownership of resolving the issue, no matter the number of iterations necessary or how many colleagues they need to involve in the case. This builds loyalty.

Over and over, I’ve seen this kind of customer support transform users into advocates, even for ordinary products and services.

Unfortunately, customer support is often misunderstood and misused.

Customer support as a stopgap

Rather than seeing support as a way to build the kind of relationship that ensures product and service renewals and increases the business footprint, many organizations see support as

  • A cost center
  • A way to make up for deficient — or nonexistent — product documentation
  • A remedy for poorly designed user experience
  • A shield to protect product managers’ valuable time from “irrelevant” customer feedback
  • A catch-all for lousy and inaccessible institutional websites
  • An outlet for customers to vent

In that context, it’s obvious why most organizations believe that swapping human support representatives for chatbots is a no-brainer.

And contrary to what some want us to believe, this is not a new idea.

A short history of chatbots 

Eliza, the therapist

The first chatbot, created in 1966, played the role of a psychotherapist. She was named Eliza, after Eliza Doolittle in the play Pygmalion. The rationale was that by changing how she spoke, the fictional character created the illusion that she was a duchess.

Eliza didn’t provide any solution. Instead, it asked questions and repeated users’ replies. Below is an excerpt of an interaction between Eliza and a user:

User: Men are all alike.
ELIZA: IN WHAT WAY
User: They’re always bugging us about something or other.
ELIZA: CAN YOU THINK OF A SPECIFIC EXAMPLE
User: Well, my boyfriend made me come here.
ELIZA: YOUR BOYFRIEND MADE YOU COME HERE
User: He says I’m depressed much of the time.
ELIZA: I AM SORRY TO HEAR YOU ARE DEPRESSED
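Eliza’s mechanism, pattern matching plus mirroring the user’s own words back, can be sketched in a few lines. The Python below is an illustrative toy, not Weizenbaum’s original script; the rules and pronoun table are my own simplifications, chosen only to reproduce the exchange above.

```python
import re

# A minimal ELIZA-style bot (illustrative sketch, not Weizenbaum's script):
# try each pattern in order, then "reflect" the matched fragment back.

# Pronoun swaps applied to the fragment we echo back to the user.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

# (pattern, response template) pairs, tried in order.
RULES = [
    (re.compile(r"i am (.*)", re.I), "I AM SORRY TO HEAR YOU ARE {0}"),
    (re.compile(r"(.*) made me (.*)", re.I), "{0} MADE YOU {1}"),
    (re.compile(r".* all .*", re.I), "IN WHAT WAY"),
    (re.compile(r".* always .*", re.I), "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones, in teletype caps."""
    words = fragment.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words).upper()

def eliza(utterance: str) -> str:
    """Return a canned, mirrored response; no understanding involved."""
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "PLEASE GO ON"

print(eliza("Men are all alike."))                     # IN WHAT WAY
print(eliza("Well, my boyfriend made me come here."))  # WELL, YOUR BOYFRIEND MADE YOU COME HERE
```

A handful of patterns and a pronoun table are enough to stage a plausible “therapist.” There is no comprehension anywhere in the loop, which is exactly what makes people’s reactions to it so striking.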

Eliza’s creator — computer scientist Joseph Weizenbaum — was very surprised to observe that people would treat the chatbot as a human and display emotional responses even in brief interactions with it

“Some subjects have been very hard to convince that Eliza (with its present script) is not human” 

Joseph Weizenbaum

We now have a name for this kind of behaviour

“The ELIZA effect is the tendency to project human traits — such as experience, semantic comprehension or empathy — into computer programs that have a textual interface.

The effect is a category mistake that arises when the program’s symbolic computations are described through terms such as “think”, “know” or “understand.”

Through the years, other chatbots have become famous too.

Tay, the zero chill chatbot

In 2016, Microsoft released the chatbot Tay on X (aka Twitter). Tay’s profile image was that of a “female,” and it was “designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter.”

The bot’s social media profile was an open invitation to conversation. It read, “The more you talk, the smarter Tay gets.”

Tay’s Twitter page. Microsoft.

What could go wrong? Trolls.

They “taught” Tay racist and sexually charged content that the chatbot adopted. For example

“bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got.”

After several attempts to “fix” Tay, the chatbot was shut down seven days later.

Chatbot disaster at the NGO

The helpline of the US National Eating Disorder Association (NEDA) served nearly 70,000 people and families in 2022.

Then, they replaced their six paid staff and 200 volunteers with chatbot Tessa.

The bot was developed based on decades of research conducted by experts on eating disorders. Still, it was reported to offer dieting advice to vulnerable people seeking help.

The result? Under media pressure over the chatbot’s repeated, potentially harmful responses, NEDA shut down the helpline. Now those 70,000 people were left with neither chatbots nor humans to help them.

Lessons learned?

Through these and other negative experiences with chatbots around the world, we might have thought that we understood the security and performance limitations of chatbots, as well as how easily our brains “humanize” them.

However, the advent of ChatGPT has made us forget all the lessons learned and instead has enticed us to believe that they’re a suitable replacement for entire customer support departments.

The chatbot hype

CEOs boasting about replacing workers with chatbots

If you think companies would be wary of advertising that they are replacing people with chatbots, you’re mistaken.

In July 2023, Summit Shah — CEO of the e-commerce company Dukaan — bragged on the social media platform X that they had replaced 90% of their customer support staff with a chatbot developed in-house.

We had to layoff 90% of our support team because of this AI chatbot.

Tough? Yes. Necessary? Absolutely.

The results?

Time to first response went from 1m 44s to INSTANT!

Resolution time went from 2h 13m to 3m 12s

Customer support costs reduced by ~85%

Note the use of the word “necessary” as a way to exonerate the organisation from the layoffs. I also wonder about the feelings of loyalty and trust of the remaining 10% of the support team towards their employer.

And Shah is not the only one.

Last February, Klarna’s CEO — Sebastian Siemiatkowski — gloated on X that their AI can do the work of 700 people.

“This is a breakthrough in practical application of AI! 

Klarnas AI assistant, powered by OpenAI, has in its first 4 weeks handled 2.3 m customer service chats and the data and insights are staggering: 

[…] It performs the equivalent job of 700 full time agents… read more about this below. 

So while we are happy about the results for our customers, our employees who have developed it and our shareholders, it raises the topic of the implications it will have for society. 

In our case, customer service has been handled by on average 3000 full time agents employed by our customer service / outsourcing partners. Those partners employ 200 000 people, so in the short term this will only mean that those agents will work for other customers of those partners. 

But in the longer term, […] while it may be a positive impact for society as a whole, we need to consider the implications for the individuals affected. 

We decided to share these statistics to raise the awareness and encourage a proactive approach to the topic of AI. For decision makers worldwide to recognise this is not just “in the future”, this is happening right now.”

In summary

  • Klarna wants us to believe that the company is releasing this AI assistant for the benefit of others — clients, their developers, and shareholders — but that their core concern is about the future of work.
  • Siemiatkowski only sees layoffs as a problem when it affects his direct employees. Partners’ workers are not his problem.
  • He frames the negative impacts of replacing humans with chatbots as an “individual” problem.
  • Klarna deflects any accountability for the negative impacts to the “decision makers worldwide.”

Shah and Siemiatkowski are birds of a feather: Business leaders reaping the benefits of the AI chatbot hype without shouldering any responsibility for the harms.

When chatbots disguise process improvements

A brightly coloured illustration which can be viewed in any direction. It has several scenes within it: people in front of computers seeming stressed, a number of faces overlaid over each other, squashed emojis and other motifs.
Clarote & AI4Media / Better Images of AI / User/Chimera / CC-BY 4.0

In some organizations, customer service agents are seen as jacks of all trades — their work is akin to a Whac-A-Mole game where the goal is to make up for all the clunky and disconnected internal workflows.

The Harvard Business Review article “Your Organization Isn’t Designed to Work with GenAI” provides a great example of this organizational dysfunction.

The piece presents a framework developed to “derive” value from GenAI. It’s called Design for Dialogue. To warm us up, the article showers us with a deluge of anthropomorphic language signalling that both humans and AI are in this “together.”

“Designing for Dialogue is rooted in the idea that technology and humans can share responsibilities dynamically.”

or

“By designing for dialogue, organizations can create a symbiotic relationship between humans and GenAI.”

Then, the authors offer us an example of what’s possible

“A good example is the customer service model employed by Jerry, a company valued at $450 million with over five million customers that serves as a one-stop shop for car owners to get insurance and financing. 

Jerry receives over 200,000 messages a month from customers. With such high volume, the company struggled to respond to customer queries within 24 hours, let alone minutes or seconds. 

By installing their GenAI solution in May 2023, they moved from having humans in the lead in the entirety of the customer service process and answering only 54% of customer inquiries within 24 hours or less to having AI in the lead 100% of the time and answering over 96% of inquiries within 30 seconds by June 2023.

They project $4 million in annual savings from this transformation.”

Sounds amazing, doesn’t it?

However, if you think it was a case of simply “swapping” humans with chatbots, let me burst your bubble—it takes a village.

Reading the article, we uncover the details underneath that “transformation.”

  • They broke down the customer service agent’s role into multiple knowledge domains and tasks.
  • They discovered that there are points in the AI–customer interaction when matters need to be escalated to the agent, who then takes the lead, so they designed interaction protocols to transfer the inquiry to a human agent.
  • AI chatbots conduct the laborious hunt for information and suggest a course of action for the agent.
  • Engineers review failures daily and adjust the system to correct them.

In other words,

  • Customer support agents used to be flooded with various requests without filtering between domains and tasks.
  • As part of the makeover, they implemented mechanisms to parse and route support requests based on topic and action. They upgraded their support ticketing system from an amateur “team” inbox to a professional call center.
  • We also learn that customer representatives use the bots to retrieve information, hinting that all data — service requests, sales quotes, licenses, marketing datasheets — are collected in a generic bucket instead of being classified in a structured, searchable way, i.e. a knowledge base.

And despite all that progress

  • They designed the chatbots to pass the “hot potatoes” to agents.
  • The system requires daily monitoring by humans.
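Stripped of the AI framing, what the bullets above describe is a classic triage-and-escalate pipeline: classify the inquiry, let the bot handle what it’s confident about, and hand the rest to a human. A minimal sketch in Python (the domain names, keywords, and threshold below are hypothetical, since the article doesn’t disclose Jerry’s actual design):

```python
from dataclasses import dataclass

# Hypothetical domains and keywords -- illustrative only.
DOMAIN_KEYWORDS = {
    "insurance": ["policy", "premium", "coverage", "claim"],
    "financing": ["loan", "rate", "refinance", "payment"],
}

# Confidence below this threshold escalates to a human agent,
# mirroring the "hot potato" hand-off described above.
ESCALATION_THRESHOLD = 0.5


@dataclass
class Triage:
    domain: str        # which knowledge domain the message was routed to
    confidence: float  # how sure the (mock) classifier is
    handler: str       # "bot" or "human"


def classify(message: str) -> tuple[str, float]:
    """Toy keyword classifier standing in for a trained model."""
    words = message.lower().split()
    scores = {
        domain: sum(w in words for w in kws)
        for domain, kws in DOMAIN_KEYWORDS.items()
    }
    domain, hits = max(scores.items(), key=lambda kv: kv[1])
    if hits == 0:
        return "unknown", 0.0
    return domain, min(1.0, hits / 2)


def triage(message: str) -> Triage:
    domain, confidence = classify(message)
    handler = "bot" if confidence >= ESCALATION_THRESHOLD else "human"
    return Triage(domain, confidence, handler)
```

A real system would use a trained classifier rather than keyword counts; the point is that the escalation logic, not the model, decides when humans take the lead.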

If you still don’t believe this is more about improving operations than about AI chatbots, let me share the end of the article with you.

“Yes, GenAI can automate tasks and augment human capabilities. But reimagining processes in a way that utilizes it as an active, learning, and adaptable partner forges the path to new levels of innovation and efficiency.”

In addition to hiding process improvements, chatbots can also disguise human labour.

AI washing or the new Mechanical Turk

A cross-section of the Turk from Racknitz, showing how he thought the operator sat inside as he played his opponent. Racknitz was wrong both about the position of the operator and the dimensions of the automaton. Wikipedia.

Historically, machines have often provided a veneer of novelty to work performed by humans.

The Mechanical Turk was a fraudulent chess-playing machine constructed in 1770 by Wolfgang von Kempelen. A mechanical illusion allowed a human chess master hiding inside to operate the machine. It defeated politicians such as Napoleon Bonaparte and Benjamin Franklin.

Chatbots are no different.

In April, Amazon announced that it would be removing its “Just Walk Out” technology, which allowed shoppers to skip the checkout line. In theory, the technology was fully automated thanks to computer vision.

In practice, about 1,000 workers in India reviewed what customers picked up and left the stores with.

“In 2022, the [Business Insider] report said that 700 out of every 1,000 “Just Walk Out” transactions were verified by these workers. Following this, an Amazon spokesperson said that the India-based team only assisted in training the model used for “Just Walk Out”.”

That is, Amazon wanted us to believe that although the technology was launched in 2018 under the “Amazon Go” brand, they still needed about 1,000 workers in India to train the model in 2022.

Still, whether the technology was “untrainable” or required an army of humans to deliver the work, it’s not surprising that Amazon phased it out. It didn’t live up to its hype.

And they were not the only ones.

Last August, Presto Automation — a company that provides drive-thru systems — claimed on its website that its AI could take over 95 percent of drive-thru orders “without any human intervention.”

Later, they admitted in filings with the US Securities and Exchange Commission that they employed “off-site agents in countries like the Philippines who help its Presto Voice chatbots in over 70 percent of customer interactions.”

The fix? To change their claims. They now advertise the technology as “95 percent without any restaurant or staff intervention.”

The Amazon and Presto Automation cases suggest that, in addition to clearly indicating when chatbots use AI, we may also need to label some tech applications as “powered by humans.”

Of course, there is a final use case for AI chatbots: As scapegoats.

Blame it on the algorithm

Last February, Air Canada made the headlines when it was ordered to pay compensation after its chatbot gave a customer inaccurate information that led him to miss out on a reduced fare. A quick summary below

  • A customer interacted with a chatbot on the Air Canada website, more precisely, asking for reimbursement information about a flight.
  • The chatbot provided inaccurate information.
  • The customer’s reimbursement claim was rejected by Air Canada because it didn’t follow the policies on their website, even though the customer shared a screenshot of his written exchange with the chatbot.
  • The customer took Air Canada to court and won.

At a high level, the case looks no different from one where a human support representative provides inaccurate information, but the devil is always in the details.

During the trial, Air Canada argued that they were not liable because their chatbot “was responsible for its own actions” when giving wrong information about the fare.

Fortunately, the court ordered Air Canada to reimburse the customer, but the argument opens a can of worms:

  • What if Air Canada had terms and conditions similar to ChatGPT or Google Gemini that “absolved” them from the chatbot’s replies?
  • Does Air Canada also deflect its responsibility when a support representative makes a mistake, or is it only for AI systems?

We’d be naïve to think that this attempt at using an AI chatbot for dodging responsibility is a one-off.

The planetary costs of chatbots

A brightly coloured illustration which can be viewed in any direction. It has several scenes within it: miners digging in front of a huge mountain representing mineral resources, a hand holding a lump of coal or carbon, hands manipulating stock charts and error messages, as well as some women performing tasks on computers.

Clarote & AI4Media / Better Images of AI / Labour/Resources / CC-BY 4.0

Tech companies keep trying to convince us that the current glitches with GenAI are “growing pains” and that we “just” need bigger models and more powerful computer chips.

And what’s the upside to enduring those teething problems? The promise of the massive efficiencies chatbots will bring to the table. Once the technology is “perfect”, no more need for workers to perform or remediate the half-cooked bot work. Bottomless savings in terms of time and staff.

But is that true?

The reality is that those productivity gains come from exploiting both people and the planet.

The people

Many of us are used to hearing the recorded message “this call may be recorded for training purposes” when we phone a support hotline. But how far can that “training” go?

Customer support chatbots are being developed using data from millions of exchanges between support representatives and clients. How are all those “creators” being compensated? Or should we now assume that any interaction with support can be collected, analyzed, and repurposed to build organizations’ AI systems?

Moreover, the models underneath those AI chatbots must be trained and sanitized for toxic content; however, that’s not a highly rewarded job. Let’s remember that OpenAI used Kenyan workers paid less than $2 per hour to make ChatGPT less toxic.

And it’s not only about the humans creating and curating that content. There are also humans behind the devices we use to access those chatbots.

For example, cobalt is a critical mineral for every lithium-ion battery, and the Democratic Republic of Congo provides at least 50% of the world’s cobalt supply. Forty thousand children mine it, paid $1–2 for working up to 12 hours daily while inhaling toxic cobalt dust.

80% of electronic waste in the US and most other countries is transported to Asia. Workers on e-waste sites are paid an average of $1.50 per day, with women frequently having the lowest-tier jobs. They are exposed to harmful materials, chemicals, and acids as they pick and separate the electronic equipment into its components, which in turn negatively affects their morbidity, mortality, and fertility.

The planet

The terminology and imagery used by Big Tech to refer to the infrastructure underpinning artificial intelligence has misled us into believing that AI is ethereal and cost-free.

Nothing could be further from the truth. AI is rooted in material objects: data centres, servers, smartphones, and laptops. Moreover, training and using AI models demand energy and water and generate CO2.

Let’s crunch some numbers.

  • Luccioni and co-workers estimated that training GPT-3 — a GenAI model that has underpinned the development of many chatbots — emitted about 500 metric tons of carbon, roughly equivalent to over a million miles driven by an average gasoline-powered car. It also required the evaporation of 700,000 litres (185,000 gallons) of fresh water to cool Microsoft’s high-end data centres.
  • It’s estimated that using GPT-3 requires about 500 ml (16 ounces) of water for every 10–50 responses.
  • A new report from the International Energy Agency (IEA) forecasts that the AI industry could burn through ten times as much electricity in 2026 as in 2023.
  • Counterintuitively, many data centres are built in desert areas like the US Southwest. Why? It’s easier to remove the heat generated inside the data centre in a dry environment. Moreover, that region has access to cheap and reliable non-renewable energy from the largest nuclear plant in the country.
  • Coming back to e-waste, we generate around 40 million tons of electronic waste every year worldwide and only 12.5% is recycled.
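For perspective, here is a bit of back-of-envelope arithmetic using only the figures cited above (these are the article’s estimates, not independent measurements):

```python
# Figures cited above (estimates, not measurements).
TRAINING_WATER_L = 700_000      # litres evaporated while training GPT-3
WATER_PER_BATCH_ML = 500        # ml of water per batch of responses
RESPONSES_PER_BATCH = (10, 50)  # the estimate covers 10-50 responses

# Water per single response, in millilitres (a range).
per_response_ml = tuple(WATER_PER_BATCH_ML / n for n in RESPONSES_PER_BATCH)
print(f"Water per response: {per_response_ml[1]:.0f}-{per_response_ml[0]:.0f} ml")

# How many responses consume as much water as the whole training run?
training_ml = TRAINING_WATER_L * 1000
equivalent_responses = tuple(training_ml / ml for ml in per_response_ml)
print(f"Responses matching the training-run water: "
      f"{equivalent_responses[0]:,.0f}-{equivalent_responses[1]:,.0f}")
```

In other words, by these estimates somewhere between 14 and 70 million chatbot responses evaporate as much water as the entire training run did, which is why the footprint of everyday use, not just training, matters at scale.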

In summary, the efficiencies that chatbots are supposed to bring appear to be based on exploitative labour, stolen content, and the depletion of natural resources.

For reflection

Organizations — including NGOs and governments — are under the spell of the AI chatbot mirage. They see it as a magic weapon to cut costs, increase efficiency, and boost productivity.

Unfortunately, when things don’t go as planned, rather than questioning what’s wrong with using a parrot to do the work of a human, they want us to believe that the solution is sending the parrot to Harvard.

That approach prioritizes the short-term gains of a few — the chatbot sellers and purchasers — to the detriment of the long-term prosperity of people and the planet.

My perspective as a tech employee?

I don’t feel proud when I hear a CEO bragging about AI replacing workers. I don’t enjoy seeing a company claim that chatbots provide the same customer experience as humans. Nor do I appreciate organizations obliterating the materiality of artificial intelligence.

Instead, I feel moral injury.

And you, how do YOU feel?

PS. You and AI

  • Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it’d take you years to ramp up your AI literacy?
  • Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
  • Are you concerned because your clients are prioritising AI but you keep procrastinating on ​learning about it because you think you’re not “smart enough”?

I’ve got you covered.

Insights from Four Women’s Conferences: The Value of Collective Female Wisdom

Four images: (1) Announcement of Patricia Gestoso’s talk “Automated out of work: AI’s impact on the female workforce” at the Women in Tech Festival, (2) Four British female politicians in a panel at the Fawcett Conference 2023, (3) Agenda of the Empowered to Lead Conference 2023, (4) Announcement of Patricia Gestoso’s talk “Seven Counterintuitive Secrets to a Thriving Career in Tech” at the Manchester Tech Festival.
Collage and photos by Patricia Gestoso.

In the last two weeks, I’ve had the privilege to attend four different conferences focused on women and I’ve presented at two of them.

The topics discussed were as complex and rich as women’s lives: neurodiversity in the workplace, women in politics, childcare, artificial intelligence and the future of the female workforce, child labour, impossible goals and ambition, postpartum depression at work, career myths, women in tech, accessibility, quotas… and so many more.

The idea for this article came from my numerous “aha” moments during talks, panels, and conversations at those events. I wanted to share them broadly so others could benefit as well.

I hope you find those insights as inspiring, stimulating, and actionable as I did.

Fawcett Conference 2023

On October 14th, I attended the Fawcett Conference 2023 with the theme Women Win Elections!

The keynote speakers and panels were excellent. The discussions were thought-provoking and space was held for people to voice their dissent. I especially appreciated listening to women politicians discuss feminist issues.

Below are some of my highlights

  • The need to find a space for feminist men.
  • It’s time for us to go outside our comfort zone.
  • “If men had the menopause, Trafalgar Square Fountain would be pouring oestrogen gel.”
  • If we want to talk about averages, the average voter is a woman. There are slightly more women than men (51% women) and they live longer.
  • Men-only decision-making is not legitimate, i.e. not democratic. Women make up the majority of individuals in the UK but the minority in decision-making. Overall, diversity is an issue of legitimacy.
  • The prison system for women forgets their children.
  • Challenging that anti-blackness/racism is not seen as a topic at the top of the agenda for the next election.
  • We believe “tradition matters”, so things have gone backwards for women since the pandemic.
  • In Australia, the Labor Party enforced gender quotas within the party. That increased women’s representation to 50%. The Conservative Party went for mentoring women — no quotas — and that only increased women’s participation to 30%.
  • There is a growing toxicity in X/Twitter against women. Toxic men’s content gets promoted. We need better regulation of social media.
  • More women vote but decide later in the game.
  • We cannot afford not to be bold with childcare. The ROI is one of the highest.
  • We need to treat childcare as infrastructure. 
  • There are more portraits of horses in parliament than of women.

Empowered to Lead Conference 2023

On Saturday 28th October, I attended the “Empowered to Lead” Conference 2023 organised by She Leads for Legacy — a community of individuals and organisations working together to reduce the barriers faced by Black female professionals aspiring to senior leadership and board-level positions.

It was an amazing day! I didn’t stop all day: listening to inspiring role models, taking notes, and meeting great women.

Some of the highlights below

Sharon Amesu

3 Cs:

  • Cathedral thinking — Think big.
  • Courageous leadership — Be ambitious.
  • Command yourself — Have the discipline to do things even if you’re afraid.

Dr Tessy Ojo CBE

  • We ask people what they want to do only when they are children — that’s wrong. We need to learn and unlearn to take up the space we deserve.
  • Three nuggets of wisdom: Audacity/confidence, ambition, and creativity/curiosity.
  • Audacity — Every day we give permission to others to define us. Audacity is about being bold. Overconsultation kills your dream. It’s about going for it even if you feel fear.
  • Ambition — set impossible goals (Patricia’s note: I’m a huge fan of impossible goals. I started the year setting mine on the article Do you want to achieve diversity, inclusion, and equity in 2023? Embrace impossible goals)
  • Creativity & curiosity — takes discipline not to focus on the things that are already there. Embrace diverse thinking.
  • Question 1: What if you were the most audacious, the most ambitious, and the most creative?
  • Question 2: May you die empty? Would you have used all your internal resources?

Baroness Floella Benjamin DBE

  • Childhood lasts a lifetime. We need to tell children that they are worth it.
  • Over 250 children die from suicide a year.
  • When she arrived in the UK, there were signs with the text “No Irish, no dogs, no coloureds”.
  • After Brexit, a man pushed his trolley onto her and told her, “What are you still doing here?” She replied, “I’m here changing the world, what are you doing here?”
  • She was the world’s first anchorwoman to appear pregnant on TV.
  • “I pushed the ladder down for others.”
  • “The wise man forgives but doesn’t forget. If you don’t forgive you become a victim.”
  • ‘Black History Month should be the whole year’.
  • 3 Cs: Consideration, contentment (satisfaction), courage.
  • ‘Every disappointment is an appointment with something better’.

Jenny Garrett OBE

Rather than talking about “underrepresentation”, let’s talk about “underestimation”.

Nadine Benjamin MBE

  • What do you think you sound like? Does how you sound support who you want to be?
  • You’re a queen. Show up for yourself.

Additionally, Sue Lightup shared details about the partnership between Queen Bee Coaching (QBC)  — an organisation for which I volunteer as a coach — and She Leads for Legacy (SLL).

Last year, QBC successfully worked with SLL as an ally, providing a cohort of 8 Black women from the SLL network with individual coaching from QBC plus motivational leadership from SLL.

At the conference, the application process for the second cohort was launched!

Women in Tech Festival

I delivered a keynote at this event on Tuesday 31st October. The topic was the impact of artificial intelligence (AI) on the future of the female workforce.

When I asked the 200+ attendees if they felt that the usage of AI would create or destroy jobs for them, I was surprised to see that the audience was overwhelmingly positive about the adoption of this technology.

Through my talk, I shared the myths we have about technology (our all-or-nothing mindset), what we know about the impact of AI on the workforce from workers whose experience is orchestrated by algorithms, and four different ways in which we can use AI to progress in our careers.

As I told the audience, the biggest threat to women’s work is not AI. It’s patriarchy feeling threatened by AI. And if you want to learn more about my views on the topic, go to my previous post Artificial intelligence’s impact on the future of the female workforce.

The talk was very well received and people approached me afterwards sharing how much the keynote had made them reflect on the impact of AI on the labour market. I also volunteered for mentoring sessions during the festival and all my on-the-fly mentees told me that the talk had provided them with a blueprint for how to make AI work for them.

I also collected gems of wisdom from other women’s interventions

  • Our workplaces worship the mythical “uber-productive” employee.
  • We must be willing to set boundaries around what we’re willing to do and what we’re not.
  • It may be difficult to attract women to tech startups. One reason is that it’s riskier, so women may prefer to go to more established companies.
  • Workforce diversity is paramount to mitigate biases in generative AI tools.

I found the panel about quotas for women in leadership especially insightful

  • Targets vs quotas: “A target is an aspiration whilst a quota must be met”.
  • “Quotas shock the system but they work”.
  • Panelists shared evidence of how a more diverse leadership led to a more diverse offering and benefits for customers. 
  • For quotas to work, it is crucial to look at the data. Depending on the category, it may be difficult to get those data. You need to build trust — show that it’s for a good purpose.
  • In law firms, you can have 60% of solicitors being women, but when you look at the partners it is a different story — they are mostly men.
  • A culture of presenteeism hurts women in the workplace. 
  • There are more CEOs in the UK FTSE 100 named Peter than women.
  • Organisations lose a lot of women through perimenopause and menopause because they don’t feel supported.

There was a very interesting panel on neurodiversity in the workplace 

  • Neurodivergent criteria have been developed using neurodivergent men as the standard so often they miss women. 
  • The stereotype is that if you have ADHD, you should do badly in your studies. For example, a woman struggled to get an ADHD diagnosis because she had completed a PhD.
  • Women mask neurodivergent behaviours better than men. Masking requires a lot of effort and it’s very taxing. 
  • We need more openness about neurodiversity in the workplace.

Manchester Tech Festival

On Wednesday 1st November, I delivered a talk in the Women in Tech & Tech for Good track at the Manchester Tech Festival.

The title of my talk was “Seven Counterintuitive Secrets to a Thriving Career in Tech” and the purpose was to share with the audience key learnings from my career in tech across 3 continents, spearheading several DEI initiatives in tech, coaching and mentoring women and people from underrepresented communities in tech, as well as writing a book about how women succeed in tech worldwide.

First, I debunked common beliefs such as that there is a simple solution to the lack of women in leadership positions in tech or that you need to be fixed to get to the top. Then, I presented 7 proven strategies to help the audience build a successful, resilient, and sustainable career in tech.

I got very positive feedback about the talk during the day and many women have reached out on social media since to share how they’ve already started applying some of the strategies.

Some takeaways from other talks:

I loved Becki Howarth’s interactive talk about allyship at work where she shared how you can be an ally in four different aspects:

  • Communication and decision-making — think about power dynamics, amplify others, don’t interrupt, and create a system that enables equal participation.
  • Calling out (everyday) sexism — use gender-neutral language, you don’t need to challenge directly, support the recipient (corridor conversations). 
  • Stuff around the edges of work — create space for people to connect organically, don’t pressure people to share, and rotate social responsibilities so everyone pulls their weight.
  • Taking on new opportunities — some people need more encouragement than others, and ask — don’t assume.

The talk of Lydia Hawthorn about postpartum depression in the workplace was both heartbreaking and inspiring. She provided true gems of wisdom:

  • Up to 15% of women will experience postpartum depression.
  • Talk about the possibility of postpartum depression before it happens.
  • Talk to your employer about flexible options.
  • Consider a parent-buddy scheme at work.
  • Coaching and therapy can be lifesaving.

Amelia Caffrey gave a very dynamic talk about how to use ChatGPT for coding. For me, one of the most interesting points she brought up is that there is no longer an excuse for writing inaccessible code. For example, you can add to the prompt the requirement that the code must be accessible to people using screen readers.

Finally, one of the most touching talks was from Eleanor Harry, Founder and CEO of HACE: Data Changing Child Labour. Their mission is to eradicate child labour in company supply chains.

There are 160 million children in child labour as of 2020. HACE is launching the Child Labour Index: the only quantitative metric in the world for child labour performance at a company level. Their scoring methodology is based on cutting-edge AI technologies, combined with HACE’s subject matter expertise. The expectation is that the index will provide the investor community with quantitative leverage to push for stronger company performance on child labour.

Eleanor’s talk was an inspiring example of what tech and AI for good look like.

Back to you

With so many men competing in the news, social media, and bookstores for your attention, how are you making sure you give other women’s wisdom the consideration it deserves?

Work with me — My special offer

“If somebody is unhappy with your life, it shouldn’t be you.”

You have 55 days until the end of 2023. I dare you to

  • Leave behind the tiring to-do list imposed by society’s expectations.
  • Learn how to love who you truly are.
  • Become your own version of success.

If that resonates with you, my 3-month 1:1 coaching program “Upwards and Onwards” is for you.

For £875.00, we’ll dive into where you are now and the results you want to create, we’ll uncover the obstacles in your way, explore strategies to overcome them, and implement a plan.

Contact me to explore how we can work together.

Monumental Inequity: The Missing Women

Potted bay laurel tree. In front, there is a stone plaque on a podium with the text "In memory of the investigative journalist Daphne Caruana Galizia. Born in Sliema in 1964, assassinated on 16 October 2017 for seeking the truth. May this simple bay laurel remind us of her wisdom, victory and triumph over darkness".
Monument to Daphne Caruana Galizia. Photo by Patricia Gestoso.

I went on holiday in August with the very clear objective of spending time with my brother — who lives in Spain — and my parents — who live in Venezuela.

From that point of view, I’m happy to report that it was mission accomplished.

I also wanted to rest. So I thought I’d put my women’s rights activism aside during the vacation and have a lighthearted summer break.

That was a total failure.

I had little rest and I couldn’t park my activism. However, I learned a lot about myself, what’s important to me, and how central my advocacy for women is to the way I perceive the world and the legacy I want to leave behind. Because these events happened during my holiday, I could slow down enough to recognise why they triggered such intense emotions in me and take the time to process them.

Here is the first installment of three articles capturing three intense experiences related to women during my vacation. The first one is about the absence of real women from those symbols of power, remembrance, and cultural identity that we call monuments.

Invisibility

The holiday started when I met with my mother, brother, and sister-in-law in Malta to spend a week on the island. 

Before the pandemic, I had been there for a scuba diving vacation. It was a nice holiday, but when I discovered that Malta was the only country in the EU where abortion was penalised, I told myself that I wouldn’t go back. Although the law was amended in June this year, it’s still very restrictive. For example, in cases of severe fetal malformation, incest, or rape, women are still liable to imprisonment for a term of eighteen months to three years.

Of course, that was until my family thought it was a good place for the holidays and, rather than pushing back, I decided to “park” my activism for a week.

But I couldn’t.

Very quickly, walking through the capital, Valetta, and visiting multiple towns in the islands of Malta and Gozo, I realised what to expect

  • Churches.
  • Nice streets and houses in yellowish bricks.
  • Statues of men, especially politicians.

A monument is a type of structure that was explicitly created to commemorate a person or event, or which has become relevant to a social group as a part of their remembrance of historic times or cultural heritage, due to its artistic, historical, political, technical or architectural importance.

Examples of monuments include statues, (war) memorials, historical buildings, archaeological sites, and cultural assets.

The word “monument” comes from the Latin “monumentum”, derived from the verb monere (comparable to the Greek mnemosynon), which means ‘to remind’, ‘to advise’ or ‘to warn’.

Wikipedia

Of course, with two notable — and expected —  exceptions

  • Religion —  Statues of the Virgin Mary, female saints and mystics…
  • Embodiment of an idea — e.g. Statues of women personifying independence. 

It hit me especially hard when I saw the monument in Sliema to Daphne Caruana Galizia, a journalist and anti-corruption activist assassinated by a car bomb. It’s a bay laurel tree to “remind us of her wisdom, victory and triumph over darkness” (see the image illustrating this article).

Again, women as the embodiment of ideas. I wanted so badly to see a statue of her.

Unfortunately, the lack of statues of real women is not only a problem in Malta.

And it’s not only about statues

  • Only around 10% of streets and public spaces worldwide are named after women. The project only 8% brings awareness to the fact that in Barcelona (Spain), streets named after women account for only 8% of all public spaces, with most located outside the city center. On their interactive website, they also highlight that streets named after women are typically about 62 meters shorter than streets named after men.
  • And what about when we try to redress the imbalance? You either need sponsors to pay for it or you should expect public humiliation and threats to your physical integrity, as happened to Caroline Criado Perez when she dared to campaign to reinstate a woman on an English banknote.

As all the information was sinking in, I remembered watching a film as a child about the neutron bomb. Its premise was that those bombs could “kill people and spare buildings”. I can still see the black-and-white scenes portraying perfectly clean streets and buildings — no life at all.

I thought: if life was erased and only “infrastructure” remained, and some aliens visited planet Earth, what would they make of our statues, streets, buildings, history books, museums, and banknotes?

Monuments also play an important role in shaping our collective memory. They serve as tangible reminders of historical events and figures, helping to preserve our cultural heritage for future generations. 

Monuments of Victoria

Here comes my guess: Those aliens would conclude that female human beings never existed. That we were merely an imaginary artifact for men to get inspired, illustrate concepts, and express their ideas about beauty.

The remedy? To strive to be too much — we have so many centuries to catch up on! When in doubt, let’s remember bell hooks’s words of wisdom and apply them to all domains

No black woman writer in this culture can write “too much”. Indeed, no woman writer can write “too much”…No woman has ever written enough. 

bell hooks

CALL TO ACTION: Let’s inundate the world with our ideas and our work. Because even if they are

  • Unfinished – we can decide that they’re finished for today.
  • Unpopular – what’s criticised one day can be a success the next.
  • Ignored – if we hide them, we’ll never know.

Let’s ensure we leave proof that we existed.

PS 

Dear Reader, 

This is the first time I’m delivering an article in three installments. It was not planned, but today it feels like the right thing to do. Thank you for your kindness, patience, and support as I try this experiment. The next one is on harassment.

Work with me

Contact me to explore how we can work together

How to move diversity, inclusion, and equity forward, three articles at a time

I feel I’ve been neglecting the readers of my blog, that is, YOU, this year.

On the bright side, I have continued to embed diversity, equity, and inclusion in organisations, technology, and workplaces through opinion articles and fiction.

I’m delighted to share with you that my writing has been featured in three magazines in the last three months.

Artificial Intelligence and the Global South

Scattered white plastic figures resembling humans sitting at tables in front of laptops. The white background makes their environment look bleak.
Max Gruber / Better Images of AI / Clickworker Abyss / Licenced by CC-BY 4.0

In September, the economics e-magazine The Mint published my article How artificial intelligence is recolonising the Global South.

In the 5-min piece, I discuss how the Global North exploits poverty and weak laws in the South to accelerate its digital transformation.

Have you ever asked yourself:

  • Who moderates our social media?
  • Who annotates the images for our self-driving cars?
  • Who extracts the metals needed for our smartphones?
  • On which populations are AI algorithms tested?

Being accountable for the books we read

A computer-generated photographic style image showing piles of distorted books with some surreal landscape features in the immediate foreground, such as a kind of beach and games board. The books merge into each other in an impressionistic, digitally blurred way, and rising out of them and taking up the main part of the image is a huge undefined concrete structure topped with more books and folders that get bigger as they go up.
jbustterr / Better Images of AI / A monument surrounded by piles of books / Licensed by CC-BY 4.0

In October, Certain Age Magazine published The DEI Booklist: Five books to think and act differently, where I reflect on the fact that whom we read matters as much as what we read.

In the article, I review 5 books:

  • Rage Becomes Her: The Power of Women’s Anger by Soraya Chemaly
  • Care Work: Dreaming Disability Justice by Leah Lakshmi Piepzna-Samarasinha
  • Data Feminism by Catherine D’Ignazio and Lauren F. Klein
  • Whipping Girl: A Transsexual Woman on Sexism and the Scapegoating of Femininity by Julia Serano
  • Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford

I also share how I overcame the inertia of only reading books written by White, able, American, heterosexual cis-men.

Scoop: It took two years!

Using short fiction to get people talking about emerging technology

Black and white photographs of the faces of White people scattered across a white background and grouped by similarity.
Philipp Schmitt & AT&T Laboratories Cambridge / Better Images of AI / Data flock (faces) / Licensed by CC-BY 4.0

Last week, the Medium magazine The Lark published my second short fictional story, The Life of Data Podcast. As in the previous one – The Graduation – I’ve used future fiction to question the interplay between humans and technology, specifically AI.

Have you ever wondered what happens to your photos circulating on social media? That’s what I explore in this 10-min short fictional story.

In a nutshell, I imagined what the data from the digital portrait of a Black schoolgirl would share – if it was invited to speak on a podcast – about how it moves inside our phones, computers, and networks.

How does the story resonate with you?

And the cherry on the cake

In August 2022, I was featured in the Computer Weekly 2022 longlist of the most influential women in UK tech.

Each year, Computer Weekly publishes the longlist of all of the women put forward to be considered for its list of the top 50 Most Influential Women in UK Tech.

And I was nominated!

Looking at the names of the 600 other women in the UK who were also nominated was such a boost of energy! Among them, I found great role models, IT leaders, community builders, and amazing rising stars.

One thing I love about the list is that it doesn’t only feature women in software development, dispelling the myth that tech is only about coding. Tech is so much more! Women investors, CEOs, COOs, non-tech founders…

If you’re unsure if there is a place for you in tech, please have a look at the list and get inspired. We’re waiting for you!


As I mentioned in a previous post, I’m writing a book and I need your help!

I’d be immensely grateful if you could complete and/or share with your network of women in tech this short survey about your/their experiences at work.

What do I mean by “Women in Tech”? Women working in any function (R&D, HR, services, finance, CXO) in the tech sector (software, hardware…) or in tech-related functions in other sectors (e.g. IT, cybersecurity…).

Whilst the survey is anonymous, you’ll have the option to get involved in the project before submitting the form. Thanks for your support!


Inclusion is a practice, not a certificate!

The graduation: My first experiment with future narratives

Green road sign with the text "Welcome to the future".
Image by mykedaigadget from Pixabay.

(9 min read)

The best way to predict the future is to invent it.

Alan Kay

For the last 6 years, I’ve been very vocal about what’s wrong with products, services, and workplaces that exclude users and employees. I’ve designed visual tools, given talks, and created communities to highlight the problems and build a business case for diversity and inclusion. Whilst all those efforts have contributed to increasing awareness about the issues, change has been incremental at best. What’s more, the pandemic is already threatening to reverse any progress made in the last decades.

Exceptional times call for exceptional measures

You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.

R. Buckminster Fuller

What if, instead, I drew a picture of a better future? The occasion was the final assignment for a creative writing course sponsored by Arts Council England: a 2,000-word story related to World War II.

Keep reading to discover my assignment, which is now part of the book “VE75 An Anthology of Short Stories” published in September 2020 by Trafford Libraries.


The Graduation

What’s not to like about waking up and seeing through your window a picture-perfect tropical beach with its palm trees, blue water, and white sand? And that, every single morning. Not once have I regretted moving here in 2025.

It’s hard. The inviting sea, the warm sun… All is telling me, “Ada, get out of the house and enjoy the day!” But I know myself. If I leave the house now, it’ll be hard to come back and clock my daily duties.

Ok, let’s get on with it.

Where did I leave my e-brain?

Quick visual survey of the messy room.

Bed? Nope.

Night table? Neither.

Floor? Rug? Armchair? No, no, no…  It’s there, on the bookcase.

Yes, I know. That wouldn’t happen with a cranial microchip implant. Yeah, chips enable the seamless “real2virtual experience” – as the ads call it – and you don’t forget where you’ve left them. Still, I prefer to stick to the old-fashioned dialogue experience. More importantly, no matter what they say, I’m sure they record dreams and private musings.

Anyway, finally the e-brain is inside my right ear. I’m almost ready. But first, coffee.

I go to the kitchen and prepare coffee the old way. I admit it’s a silly outdated habit but drinking synthetic ADF-238 – even if it’s caffeinated – doesn’t cut it for a nostalgic like me.

Almost there. I just need to settle into the studio.

Coffee in hand? Check. Sitting in my favourite armchair? Check. Ready to start.

I think, “Pandora, wake up”.

A voice in my head replies, “Good morning Ada”.

“Pandora, what do you have for me?”

“Today’s objective is to write a short story for tweens that is centred on struggle and resilience Ada.”

I ask the voice in my head, “Pandora, who’s this for?”

“KindBooks publishers. They are editing a book for preteens on the topic of change. They’ve invited you because of your track record as an award-winning researcher showcasing the impact of World War II on women and minorities Ada.”

“Pandora, it sounds like they did their homework… and they know how to flatter. I’m ready. What do you need from me?” 

“Characters’ names, location, background, a couple of historical figures and facts, and the ending Ada.”

“Pandora, the protagonist will be Marta, who works as a nurse in Sokin, the capital of the imaginary kingdom of Tulia. As for a connection with WWII, the focus will be Polish women.

For the first historical figure, let’s pick Krystyna Skarbek, who became a British agent. Among her feats, she secured the release of two British spies by meeting with the Gestapo in France, which she had reached by parachute from Algiers.

Wanda Gertz will be the second. When the First World War started, she cut her hair off and dressed as a man to serve in the army. During World War II, she created a women’s sabotage unit that targeted German military personnel and strategic positions. She was captured and survived four prisoner-of-war camps.

Next, three facts. First, during the Warsaw Uprising in 1944, Germans killed about 50,000 residents of the Wola and Ochota districts in 3 days. The Radium Institute, which treated women with cancer, was one of the hospitals that suffered the worst: patients and nurses were raped and killed, and the hospital looted, by Russian collaborationist forces. During the uprising, civilians from Warsaw were sent to forced labour camps.

Fact two: Germans needed workers for their war factories and farms, as well as nannies to promote high birth rates among German women. They started the mass recruitment and abduction of girls and women in Poland. The abducted women were starved, beaten, and raped. They were also forced to sew a purple letter ‘P’ onto their clothing to flag them as Polish.

Fact three: in 1945, the British RAF bombarded Dresden with high-explosive and incendiary bombs for 3 days; about 25,000 people died and the city was devastated.

Finally, I want a happy ending. Skarbek received an OBE. Let’s get Marta one too.”

Then, I added, “Pandora, cross-check references as necessary and read it for me, please”.

A minute later Pandora spoke.

“Once upon a time, there was a young nurse called Marta living in the kingdom of Tulia. Everybody loved her. She was kind, always willing to help, and with a perpetual smile on her face. That was soon about to change.

Before the war started, her life had a nice and easy flow. She lived with her parents in a small apartment in the periphery of the capital of Tulia, Sokin. Every day, she’d take the tram to go to the centre of the city, where the hospital was located. She loved to have the chance to make a positive difference in somebody’s life.

One day, the neighbouring kingdom of Dreq invaded Tulia. Their soldiers were very cruel. They bombarded Sokin and killed thousands of its citizens. Still, the city was not ready to give in, which prompted the invaders to lay siege to it. In spite of the explosions and the lack of food, Marta and her compatriots resisted. This made the invaders even angrier.

When Marta thought the situation couldn’t get any worse, the hospital where she worked was bombed and she was arrested by the enemy forces. They attached a sign with the letter “P” – for prisoner – to her clothing and threw her onto a train with hundreds of other Tulians.

The train journey was terrible. Her wagon had no seats, windows, or food. Everybody was crammed and fights over a couple of inches of space were constant.

Then, one morning, they stopped moving. When the door opened, she realized they were inside a huge train station.

As the captives were coming out of the train, the soldiers assigned them to different groups. Hers was told they’d be taken to private houses to be nannies. Then, without pause, they forced them to march out of the building.

Once outside, Marta realized that they were in a big city in Dreq. And it had the most outstanding cathedral she’d ever seen.

They stopped in front of a large mansion with a beautiful ornamental garden, where the soldiers handed her over to her new captors. Soon she’d realize that her hardships were far from over.

The couple who owned the house were prominent in the army and had six children. Marta was expected to wake up every day at 4 in the morning and work non-stop until midnight, with little more to eat than bread and water. If she made a mistake, she was punished. If somebody was angry, she was beaten. If somebody was bored, she was abused.

As the years passed, life became harder. Dreq was at war with several kingdoms. Fuel shortages and food rationing became common.

Then, one day, everything changed. The sound of a myriad of planes invaded the air, followed by explosions. One, two, three… an incendiary hail of bombs covered the city.

Marta woke up with the blasts. In between bangs, she overheard the masters of the house arguing in the main hall. Husband and wife were discussing the orders he had received to lead the defence of the city. His wife didn’t want him to leave. He harshly reminded her of their duty towards Dreq and announced that he was going to the headquarters to join the military centre of operations.

Marta heard the front door slam. From that moment onwards, it’d be left to her, the lady of the house, and all the children to fend for themselves.

Life became an endless fight for survival. During daylight, she’d search for food among the ruins of the buildings. At night, the light and explosions from the incendiaries wouldn’t let her sleep. When one of the bombs hit the cathedral, she realized that there was no safe place in the city and that Dreq might be losing the war. Although she was scared, Marta knew that if she could survive the chaos, she might be able to return to Tulia.

One morning, the planes and the bombs stopped. At first, nobody dared to go out. As the hours passed, people started to come out of their houses. It was then that she saw the foreign soldiers patrolling the city in their tanks.

Two of the soldiers entered the house and took the family into custody. Marta stood there. She didn’t know what to do. She tried to explain that she wanted to go home, but it was clear they couldn’t understand her. Instead, they waved at her, making signs to follow them. Marta jumped into their tank and they all drove to the soldiers’ military quarters.

Their garrison was basic but it had toilets, beds, and food. She discovered that it was run by a coalition of other kingdoms fighting against Dreq. The war was not yet over and her return to Tulia would have to wait.

One day, she heard three soldiers talking about an impending mission to rescue two spies that had critical information to win the war. They had been captured by Dreq soldiers when they were crossing the border to Marta’s kingdom. Unfortunately, the operation had been put on hold because of its high risk.

Marta didn’t think twice. She confronted the soldiers and asked them to take her to their superior. She’d volunteer for the operation!

The captain was a tall man in uniform who looked like he hadn’t slept in weeks. When the soldiers explained to him that Marta wanted to lead the rescue mission, he shook his head. There was no way he’d allow it; it was too dangerous.

Marta demanded, asked, and finally begged for the opportunity to join the mission. Nothing was too risky if that meant she’d go back to Tulia.

Finally, the captain gave in. Marta was in.

In the following days she learnt how to deploy a parachute, shoot a gun, and toss a grenade. They also cut her hair off and taught her the basics of impersonating a soldier.

Finally, the day of the mission arrived.

Well into the night, she boarded a small military aircraft dressed in a Dreq commander’s uniform. She was dropped by parachute close to the location where the spies were held prisoner. As planned, a car was waiting for her at the landing point. They handed her a loaded pistol and a cyanide-filled pen in case the operation failed and she decided to take her own life to avoid torture and interrogation.

Marta’s heart beat fast with anticipation. She gathered herself and walked to the cabin where the spies were held prisoner.

To her surprise, when she opened the door, she found two soldiers sat at a table playing cards and drinking alcohol. They were drunk. Obviously, they’d assumed that their remote location would spare them unwelcome visits from their superiors and rescue squads.

They looked at her and immediately stood up and performed a military salute – all that whilst trying to hide the cards and booze. She couldn’t believe she was pulling it off! She was so close now.

In the coarsest voice she could manage, she demanded to interrogate the prisoners. One of the soldiers – maybe relieved that Marta was not questioning their pathetic state – gave her a key with one hand whilst with the other indicated a closed door at the end of a corridor behind them.

Marta walked towards the door, unlocked it, and quickly entered the tiny, dirty cell, closing the door behind her. There were the two bruised spies sitting on the floor. Without delay, she knelt down and whispered that she was on their side and asked them to follow her.

Once back at the entrance, where the soldiers were still standing upright, she unceremoniously announced that she had orders to take the prisoners with her. Then, she handed a stamped document to the one who had given her the key. He glanced over the fake transfer papers and returned them to her with a nod. She signalled the door to the spies and the three of them left the cabin before the soldiers could change their minds.

The car was waiting for them. The driver took them to a hidden airport where Marta and the two spies boarded the plane that’d take them to the headquarters of the military coalition fighting against Dreq.

Once they landed, the spies were rushed to the command centre, where they shared key information about the position of the enemy troops and their attack plans. That was all the coalition needed to finish the war.

At last, Marta could return home.

They told her that, once the battle was over, she’d be transported by a military cargo plane to Sokin, where her parents were waiting for her. What’s more, she’d receive the Medal of Resilience from the Queen of Tulia herself in recognition of her courageous efforts towards the liberation of the country.

Marta let out a long sigh of relief. For the first time in years, she allowed herself to savour the present and dream of the future.”

Pandora paused. After a couple of minutes, Pandora’s voice asked, “Corrections Ada?”

“None, Pandora. I’m very pleased with the story. It’s taken you a few months to learn my writing style, but I’m happy to say that today you’ve graduated as my scribe.”

“Thanks. Please confirm you transfer the copyright to the publishers Ada.”

“Confirmed Pandora.”

The voice said, “Your daily token allowance has been deposited in your blockchain account Ada”.

“Pandora, go to sleep now.”

The voice replied “I’m signing off Ada”.

The work for the day was done. Time for that stroll on the beach.

I left the e-brain on the coffee table and walked towards the door.

An illustration of a tropical beach with white sand, blue water, and palm trees in the foreground, and mountains in the background
Image by Clker-Free-Vector-Images from Pixabay

The End


What do you think about future narratives as a tool to upend the status quo? What resonated with you in my first attempt? What did you find controversial?


Thanks to Arts Council England, Trafford Libraries, and Charlie Lea for the free online VE 75 themed creative writing workshop.

UPDATE FROM August 4th, 2024 – It’s been four years since I wrote this story. At the time, we were in a pandemic. It was also well before ChatGPT was launched!