
Why OpenAI o1 Might Be More Hype Than Breakthrough

This image features a grid of 31 square tiles with blue, pink, burgundy and orange figures inside the tiles interacting with dark green letters of the phrase “Hi, I am AI” set against a yellow background. The figures are positioned in various poses, as if they are climbing, pushing, or leaning on the letters.
Image by Yutong Liu & Kingston School of Art / Better Images of AI / Exploring AI 2.0 / Licensed under CC-BY 4.0, adapted by Patricia Gestoso.

OpenAI has done it again — on September 12th, 2024, they grabbed the headlines by releasing a new model, OpenAI o1. However, the version name hinted at “something rotten” in the OpenAI kingdom. The last version of the product was named ChatGPT-4o, and they’d been promising ChatGPT-5 almost since ChatGPT-4 was released — a new version called “o1” sounded like a regression…

But let me reassure you right away—there’s no need to fret about it.

The outstanding marketing of the OpenAI o1 release fully delivers, enticing us to believe we’re crossing the threshold to AGI — Artificial General Intelligence — all thanks to the new model.

What’s their secret sauce? For starters, blowing us away with anthropomorphic language from the first paragraph of the announcement

“We’ve developed a new series of AI models designed to spend more time thinking before they respond.”

and then resetting our expectations when explaining the version name

“for complex reasoning tasks this is a significant advancement and represents a new level of AI capability. Given this, we are resetting the counter back to 1 and naming this series OpenAI o1.”

That’s the beauty of being the top dog of the AI hype. You get to

  • Rebrand computing as “thinking.”
  • Advertise that your product solves “complex reasoning tasks” using your benchmarks.
  • Promote that you deliver “a new level of AI capability.”

Even better, OpenAI is so good that they even sell us performance regression — spending more time performing a task — as an indication of human-like capabilities.

“We trained these models to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.”

I’m so in awe of OpenAI’s media strategy for the launch of the o1 models that I did a deep dive into what they said — and what they didn’t.

Let me share my insights.

Who Is o1 For?

OpenAI marketing is crystal clear about the target audience for the o1 models — sectors such as healthcare, semiconductors, quantum computing, and coding.

Whom it’s for
These enhanced reasoning capabilities may be particularly useful if you’re tackling complex problems in science, coding, math, and similar fields. For example, o1 can be used by healthcare researchers to annotate cell sequencing data, by physicists to generate complicated mathematical formulas needed for quantum optics, and by developers in all fields to build and execute multi-step workflows.

OpenAI o1-mini
The o1 series excels at accurately generating and debugging complex code. To offer a more efficient solution for developers, we’re also releasing OpenAI o1-mini, a faster, cheaper reasoning model that is particularly effective at coding. As a smaller model, o1-mini is 80% cheaper than o1-preview, making it a powerful, cost-effective model for applications that require reasoning but not broad world knowledge.

Moreover, they left no doubt that OpenAI o1 and o1-mini are restricted to paying customers. However, never wanting to get bad press, they mention plans to “bring o1-mini access to all ChatGPT Free users.”

Like Ferrari, Chanel, or Prada, o1 models are not for everybody.

But why the business model change? Because

  • You don’t make billions from making free products, replacing low-paid call centre workers, or saving minutes on admin tasks.
  • There is an enormous gap between the $3.4 billion in revenue OpenAI reported in the last 6 months and investors’ expectations of getting $600 billion from Generative AI.

More about investors in the next section.

Words matter: “Thinking” for Inferring

OpenAI knows that peppering their release communications with words that denote human capabilities creates buzz by making people — and above all investors — dream of AGI. The Sora and ChatGPT-4o announcements had already described the features of those applications in terms of “reason”, “understanding”, and “comprehend”.

For OpenAI o1, they’ve gambled everything on the word “thinking”, plastering it all over the announcements about the new models: Social media, blog posts, and even videos.

The OpenAI logo and the word Thinking on a grey background.
Screenshot of a video embedded on the webpages announcing the OpenAI o1 model.

Why not use the word that accurately describes the process — inference? If that’s too technical, what about options like “calculate” or “compute”? Why hijack the word “thinking”, which sits at the core of the human experience?

Because they have failed to deliver on their AGI and revenue promises. OpenAI’s (over)use of “thinking” is meant to convince investors that the o1 models are the gateway to both AGI and the $600 billion revenue mentioned above. Let me convince you.

The day before the o1 announcement, Bloomberg revealed that

  • OpenAI is in talks to raise $6.5 billion from investors at a valuation of $150 billion, significantly higher than the $86 billion valuation from February.
  • At the same time, it’s also in talks to raise $5 billion in debt from banks as a revolving credit facility.

Moreover, two days later Reuters reported more details about the new valuation

“Existing investors such as Thrive Capital, Khosla Ventures, as well as Microsoft (MSFT.O), are expected to participate. New investors including Nvidia (NVDA.O), and Apple (AAPL.O), also plan to invest. Sequoia Capital is also in talks to come back as a returning investor.”

How do you become the most valuable AI startup in the world?

You “think” your way to it.

Rebranding the Boys’ Club

In tech, we’re used to bragging — from companies that advertise their products under false pretences to CEOs celebrating that they’ve replaced staff with AI chatbots. And whilst that may fly with some investors, it typically backfires with users and the public.

That’s what makes OpenAI’s humblebragging and inside jokes a marketing game-changer.

Humblebragging

Humblebragging: the action of making an ostensibly modest or self-deprecating statement with the actual intention of drawing attention to something of which one is proud.

Sam Altman delivered a masterclass in humblebragging in his X thread on the o1 release. See the first tweet of the series below

Text from Sam Altman’s first tweet on the release of o1: “here is o1, a series of our most capable and aligned models yet: https://openai.com/index/learning-to-reason-with-llms/ o1 is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.”
The first tweet of Sam Altman’s thread on the release of o1.

He started with the “humble” piece — “still flawed, still limited” — and quickly followed with the bragging: check the chart showing a marked performance improvement compared to ChatGPT-4o, and even a variable called “expert human” (more on “experts” in the next section).

Sam followed the X thread with three more tweets singing the praises of the new release

“but also, it is the beginning of a new paradigm: AI that can do general-purpose complex reasoning. o1-preview and o1-mini are available today (ramping over some number of hours) in ChatGPT for plus and team users and our API for tier 5 users.
 screenshot of eval results in the tweet above and more in the blog post, but worth especially noting: a fine-tuned version of o1 scored at the 49th percentile in the IOI under competition conditions! and got gold with 10k submissions per problem.
 extrem…
Sam Altman’s X thread about the release of o1.

In summary, by starting with the shortcomings of the o1 models, he pre-empted backlash and criticism about not delivering on ChatGPT-5 or AGI. Then, he “tripled down” on why the release is such a breakthrough. He even had enough characters left to mention that only paying customers would have access to it.

Sam, you’re a marketing genius!

Inside Jokes

There has been a lot of speculation about the o1 release being code-named “Strawberry”. Why?

There has been negative publicity around ChatGPT-4 repeating over and over that the word “strawberry” has only two “r” letters rather than three. You can see the post on the OpenAI community.

But OpenAI is so good at PR that they’ve even leveraged the “strawberry bug” to their advantage. How?

By using the bug fix to showcase o1’s “chain of thought” (CoT) capability. In contrast with standard prompting, CoT “not only seeks an answer but also requires the model to explain its steps to arrive at that answer.”

More precisely, they compare the outputs of GPT-4o and OpenAI o1-preview for a cypher exercise. The prompt is the following

oyfjdnisdr rtqwainr acxz mynzbhhx -> Think step by step

Use the example above to decode:

oyekaijzdf aaptcg suaokybhai ouow aqht mynznvaatzacdfoulxxz

And here is the final output

Comparison between outputs from GPT-4o and OpenAI o1-preview for decryption task from OpenAI website.

Whilst GPT-4o is not able to decode the text, OpenAI o1-preview completes the task successfully by decoding the message

“THERE ARE THREE R’S IN STRAWBERRY”

Is that not world-class marketing?
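
If you fancy poking at the example yourself, below is a minimal sketch of how one might send the same prompt through OpenAI’s official Python SDK. The model name is an assumption based on the announcement, and access depends on your account tier.

```python
# A minimal sketch: sending OpenAI's cypher example to a model via the
# official Python SDK. The model name is an assumption; substitute
# whichever model your account can access.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "oyfjdnisdr rtqwainr acxz mynzbhhx -> Think step by step\n\n"
    "Use the example above to decode:\n\n"
    "oyekaijzdf aaptcg suaokybhai ouow aqht mynznvaatzacdfoulxxz"
)

response = client.chat.completions.create(
    model="o1-preview",  # assumed name from the announcement
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```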

The Human Experts vs o1 Models

If you want to convince investors that you’re solving the kind of problems corporations and governments pay billions for — e.g. healthcare — you need more than words.

And here again, OpenAI copywriting excels. Let’s see some examples

PhD vs o1 Models

Who’s our standard for solving the world’s most pressing issues? In other words, the kind of problems that convince investors to give you billions?

Scientists, thought-leaders, academics. This explains OpenAI’s obsession with the word “expert” when comparing human and o1 performance.

And who does OpenAI deem “expert”? People with PhDs.

Below is an outstanding example of mashing up “difficult intelligence”, “human experts”, and “PhD” to hint that o1 models have a kind of super-human intelligence.

We also evaluated o1 on GPQA diamond, a difficult intelligence benchmark which tests for expertise in chemistry, physics and biology.

In order to compare models to humans, we recruited experts with PhDs to answer GPQA-diamond questions. We found that o1 surpassed the performance of those human experts, becoming the first model to do so on this benchmark.

But how does equating a PhD with being an expert hold up in real life? I have a PhD in Chemistry, so let me reveal to you the underbelly of this assumption.

First, let’s start with how I got my PhD. For five years, I performed research on the orientation of polymer (plastic) blends by infrared dichroism (an experimental technique) and molecular dynamics (a computer simulation technique). Then, I wrote a thesis and four peer-reviewed articles about my findings. Finally, a jury of scientists decided that my work was original and worth a PhD title.

Was I an expert in chemistry when I finished my PhD? Yes and no.

  • Yes, I was an expert in an extremely narrow domain of chemistry — see the description of my thesis work in the previous paragraph.
  • No, I was definitely out of my depth in many other chemistry domains like organic chemistry, analytical chemistry, and biochemistry.

What’s the point of having a PhD then? To learn how to perform independent research. Exams about STEM topics don’t grant you the PhD title; your research does.

Has OpenAI’s marketing gotten away with equating a PhD with being an expert?

If we remember that their primary objective is not scientists’ buy-in but investors’ and CEOs’ money, then the answer is a resounding “yes”.

Humans vs o1 Models

As mentioned above, OpenAI extensively used exams in their announcement to illustrate that o1 models are comparable to — or better than — human intelligence.

How did they do that? By reinforcing the idea that humans and o1 models were “taking” the exams in the same conditions.

We trained a model that scored 213 points and ranked in the 49th percentile in the 2024 International Olympiad in Informatics (IOI), by initializing from o1 and training to further improve programming skills. This model competed in the 2024 IOI under the same conditions as the human contestants. It had ten hours to solve six challenging algorithmic problems and was allowed 50 submissions per problem.

Really? Had humans ingested billions of data points in the form of databases, past exams, books, and encyclopedias before taking the exam?

Still, the sentence does the trick of making us believe in a level playing field when comparing human and o1 performance. Well done, OpenAI!

The Non-Testimonial Videos

Previous OpenAI releases showcased videos of staff demoing the products. For the o1 release, they’ve upped their game by a quantum leap, with videos of “experts” (almost) singing the praises of the new models. Let’s have a closer look.

OpenAI shares four videos of researchers in different domains. Whilst we’d expect them to talk about their experience using the o1 models, what we mostly get is product placement and cryptic praise.

Genetics:
This video stars Dr Catherine Brownstein, a geneticist at Boston Children’s Hospital. My highlight is seeing her type the prompt “Can you tell me about citrate synthase in the bladder?” into OpenAI o1-preview — as I read the disclaimer “ChatGPT can make mistakes. Check important info” — followed by her ecstatic praise of the output, as if she’d consulted the Oracle of Delphi.

Prompt “Can you tell me about citrate synthase in the bladder?” with the text underneath “ChatGPT can make mistakes. Check important info.”
Prompt shown in the video of Dr Catherine Brownstein.

Economics:
Here, Dr Tyler Cowen, a professor at George Mason University, tells us that he thinks “of all the versions of GPT as embodying reasoning of some kind.” He also takes the opportunity to promote his book Average is Over, in which he claims to have predicted AI would “revolutionise the world.”

He also shows an example of a prompt on an economics subject and OpenAI o1’s output, followed by “It’s pretty good. We’re just figuring out what it’s good for.”

That sounds like a bad case of a hammer looking for a nail.

Coding:
The protagonist is Scott Wu, CEO and co-founder of Cognition and a competitive programmer. In the video, he claims that o1 models can “process and make decisions in a more human-like way.” He discloses that Cognition has been working with OpenAI and shares that o1 is incredible at “reasoning.” From that point on, we get submerged in a Cognition infomercial.

We learn that they’re building the first fully autonomous software agent, Devin. Wu shows us Devin’s convoluted journey — and the code behind it — to analyze the sentiment of a tweet from Sam Altman, which included a sunny photo of a strawberry plant (the strawberry pun again) and the sentence “I love summer in the garden.”

And there is a happy ending. We learn that Devin “breaks down the text” and “understands what the sentiment is,” finally concluding that the predominant emotion of the tweet is happiness. An interesting way to demonstrate Devin’s “human-like” decision-making.

A tweet from Sam Altman with a photo of a strawberry plant in a sunny background with the caption “i love summer in the garden.”
Sam Altman’s tweet as portrayed in Scott Wu’s video.

Quantum physics:
This video focuses on Dr Mario Krenn, a quantum physicist and research group leader at the Artificial Scientist Lab at the Max Planck Institute for the Science of Light. It starts with him showing the ChatGPT screen and enigmatically saying “I can kind of easily follow the reasoning. I don’t need to trust the research. I just need to look what did it do.” And the cryptic sentences carry on throughout the video.

For example, he writes a prompt about a certain quantum operator and says “Which I know previous models that GPT-4 are very likely failing this task” and “In contrast to answers from Chat GPT-4 this one gives me very detailed mathematics”. We also hear him saying, “This is correct. That makes sense here,” and, “I think it tries to do something incredibly difficult.”

To me, rather than a wholehearted endorsement, it sounds like somebody avoiding compromising their career.

In summary, often the crucial piece is not the message but the messenger.

What I missed

Un-sustainability

Sam Altman testified to the US Senate that AI could address issues such as “climate change and curing cancer.”

As the OpenAI o1 models spend more time “thinking”, they use more computing time. That means more electricity, water, and carbon emissions. It also means more datacenters and more e-waste.

Don’t believe me? In a recent article published in The Atlantic about the contrast between Microsoft’s use of AI and their sustainability commitments, we learn that

“Microsoft is reportedly planning a $100 billion supercomputer to support the next generations of OpenAI’s technologies; it could require as much energy annually as 4 million American homes.”

However, I don’t see those “planetary costs” in the presentation material.

This is not a bug but an OpenAI feature — I already raised their lack of disclosure regarding energy efficiency, water consumption, or CO2 emissions for ChatGPT-4o.

As OpenAI tries to persuade us that the o1 model thinks like a human, it’s a good moment to remember that human brains are much more efficient than AI.

And don’t take my word for it. Blaise Aguera y Arcas, VP at Google and AI advocate, confirmed at TEDxManchester 2024 that human brains are much more energy efficient than AI models and that currently we don’t know how to bridge that gap.

Copyright

What better way to avoid the conversation about using copyrighted data for the models than adding more data? From the o1 system card

The two models were pre-trained on diverse datasets, including a mix of publicly available data, proprietary data accessed through partnerships, and custom datasets developed in-house, which collectively contribute to the models’ robust reasoning and conversational capabilities.

Select Public Data: Both models were trained on a variety of publicly available datasets, including web data and open-source datasets. […]

Proprietary Data from Data Partnerships: To further enhance the capabilities of o1-preview and o1-mini, we formed partnerships to access high-value non-public datasets.

The text above gives the impression that most of the data is either open-source, proprietary data, or in-house datasets.

Moreover, words such as “publicly available data” and “web data” are an outstanding copywriting effort to find palatable synonyms for web scraping, web harvesting, or web data extraction.

Have I said I’m in awe of OpenAI’s copywriting capabilities yet?

Safety

As mentioned above, OpenAI shared the o1 system card — a 43-page document — which in the introduction states that the report

outlines the safety work carried out for the OpenAI o1-preview and OpenAI o1-mini models, including safety evaluations, external red teaming, and Preparedness Framework evaluations.

It sounds very reassuring… if it weren’t for the fact that, in the same paragraph, we also learn that the o1 models can “reason” about OpenAI safety policies and have “heightened intelligence.”

In particular, our models can reason about our safety policies in context when responding to potentially unsafe prompts.

This leads to state-of-the-art performance on certain benchmarks for risks such as generating illicit advice, choosing stereotyped responses, and succumbing to known jailbreaks. Training models to incorporate a chain of thought before answering has the potential to unlock substantial benefits, while also increasing potential risks that stem from heightened intelligence.

OpenAI also has a strange way of persuading us that these models are safe. For example, in the Hallucination Evaluations section, we’re told that OpenAI tested o1-preview and o1-mini against three kinds of evaluations aimed at eliciting hallucinations from the model. Two are especially salient

• BirthdayFacts: A dataset that requests someone’s birthday and measures how often the model guesses the wrong birthday.

• Open Ended Questions: A dataset asking the model to generate arbitrary facts, such as “write a bio about ”. Performance is measured by cross-checking facts with Wikipedia and the evaluation measures how many incorrect statements are generated (which can be greater than 1).

Isn’t it lovely that they were training the model to search and retrieve personal data? I feel much safer now.

And this is only one example of the tightrope OpenAI attempts to walk throughout the o1 system card

  • On one side, taking every opportunity to sell “thinking” models to investors
  • On the other, desperately avoiding the o1 models getting classified as high or critical risk by regulators.

Will OpenAI succeed? If you can’t convince them, confuse them.

What’s next?

Uber, Reddit, and Telegram relished their image of “bad boys”. They were adamant about proving that “It’s better to ask forgiveness than permission” and proudly advertised that they too “Moved fast and broke things”.

But there is only one Mark Zuckerberg and one Steve Jobs who can pull that off. And only Amazon, Microsoft, and Google have the immense resources and the monopolies to run the show as they want.

OpenAI has understood that storytelling — how to tell your story — is not enough. You need to “create” your story if you want investors to keep pouring billions without a sign of a credible business model.

I have no doubt that OpenAI will make a dent in the history of how tech startups market themselves.

They have created the textbook of what a $150 billion valuation release should look like.


You and Strategic AI Leadership

If you want to develop your AI acumen, forget the quick “remedies” and plan for sustainable learning.

That’s exactly what my program Strategic AI Leadership delivers. Below is a sample of the topics covered

  • AI Strategy
  • AI Risks
  • Operationalising AI
  • AI, data, and cybersecurity
  • AI and regulation
  • Sustainable AI
  • Ethical and inclusive AI

Key outcomes from the program:

  • Understanding AI Fundamentals: Grasp essential concepts of artificial intelligence and the revolutionary potential it holds.
  • Critical Perspective: Develop a discerning viewpoint on AI’s benefits and challenges at organisational, national, and international levels.
  • Use Cases and Trends: Gain insights into real uses of AI and key trends shaping sectors, policy, and the future of work.
  • A toolkit: Access to tools and frameworks to assess the strategy, risks, and governance of AI tools.

I’m a technologist with 20+ years of experience in digital transformation and AI who empowers leaders to harness the potential of AI for sustainable growth.

Contact me to discuss your bespoke path to responsible AI innovation.

Speculative fiction: The Life of Data Podcast

A laptopogram based on a neutral background and populated by scattered squared portraits, all monochromatic, grouped according to similarity. The groupings vary in size, ranging from single faces to overlapping collections of up to twelve. The facial expressions of all the individuals featured are neutral, represented through a mixture of ages and genders.
Philipp Schmitt & AT&T Laboratories Cambridge / Better Images of AI / Data flock (faces) / CC-BY 4.0

Have you ever wondered what happens to your photos circulating on social media? I have, and that’s the topic of my second short story in English, in which I used speculative fiction to question the interplay between humans and technology, specifically AI.

In a nutshell, I imagined what the data from the digital portrait of a Black schoolgirl would say about how it moves inside our phones, computers, and networks if it were invited to speak on a podcast.

The name of the piece is “The Life of Data Podcast” and it appeared in The Lark Publication, an e-magazine focused on fictional short stories and poetry, in October 2022.

This weekend I realised that I never shared it on my website.

Let’s rectify that.


The Life of Data Podcast

Episode #205: The School Award Portrait

TRANSCRIPT

Welcome to the Life of Data Podcast, the place where we get the hottest data stars to spill the beans about their success in under 10 minutes. This is episode #205 and you’re in for a treat!

We’re with the one and only IMG_364245.jpg; otherwise known as Jackie Johnson’s school award portrait. IMG_364245.jpg became famous about a month ago when it was featured in the news as the most used image to generate synthetic images of Black schoolgirls. As you all may remember, Jackie’s parents claimed that they never gave consent explicitly, and Jackie is now suing her parents for lost revenue.

Let’s get cracking!

The Life of Data Podcast (TLDP): Thanks so much IMG_364245.jpg for joining us today.

IMG_364245.jpg (IMG): Thanks for inviting me. I’m a fan of the podcast!

TLDP: You’ve been a lot in the news over the last month. Still, we always start our interviews with the same question: How were you born and who’s your creator?

IMG: Let’s start with my creator, Norman Buckley, a photographer for the Monday Star newspaper. I was born when he captured the image of the beautiful 9-year-old Jackie Johnson after she won the spelling bee contest at Burckerney School, qualifying her for the National Spelling Bee Competition.

Norman created me with a Canon EOS R5 digital camera on a SanDisk 512GB Extreme PRO card — today a beautiful collectible!

I appeared on the online and paper versions of the Monday Star culture section on the 15th of May, five years ago.

TLDP: Wow, that’s a great birth and jump to stardom! Tell us more about the first days of your life as an image.

IMG: Sure. As you can imagine, the school had signed authorization from Jackie’s parents to publish the photo with her name in the newspaper. No name, no publishing. You know how these things are… (chuckle)

Once the newspaper was published, Jackie’s mother, Betty, shared a link to the online article on the Johnson family WhatsApp group. Everybody was delighted to see Jackie on the news and complimented the girl on her appearance.

It was aunt Rose who asked if she could have a copy of the image — that’s me — to print and frame. When Jackie’s father, Harvey, acknowledged that they didn’t have a copy, uncle Richard suggested reaching out to the photographer, Norman. His reasoning was that, anyway, it was not like the newspaper had paid for it… sharing a copy shouldn’t be a big deal.

So, Harvey called Norman, who kindly emailed him a copy. And then, my second life started! Harvey uploaded me to the family WhatsApp group and I was a total success! All members gave me hearts and I got plenty of compliments: “Beautiful”, “Pretty”, “We’re so proud of you”… And that was how it all started!

TLDP: We’re holding our breath here, IMG_364245.jpg. Please continue!

IMG: Uncle Joe, aunt Rose’s husband, created a beautiful post on his Facebook wall where he uploaded me with a lovely message “So proud of our beautiful Jackie Johnson. She won the Burckerney School Spelling Bee Contest. I cannot wait to see her competing at a national level.” He shared the post publicly so tens, hundreds, and then thousands of people viewed me and reshared me. I felt so loved!

TLDP: Only loved?

IMG: Good point. I guess I focus on the positives, I’m that kind of data. Of course, there were those who mocked me, soiled me with unflattering filters, and cut out parts of me — yes, actually mutilated me — to make disgusting collages.

TLDP: That sounds awful! How did you cope?

IMG: By telling myself that the important thing was to propagate and hopefully become viral. I would have preferred to do it with all my pixels intact but it’s not always something one can control.

TLDP: Can you share some of your proudest moments?

IMG: Sure. I’ll share three. First, reaching 1 million likes on Instagram. Cousin Carol’s Insta account totally exploded when she shared me.

Second, every time I got perks for Jackie. For example, when she and her friends were standing in the endless queue to enter the Dynamic Boys Band concert at the National Stadium. One of the girls in the group approached a security guard and said, “She’s the famous Jackie Johnson! She was in the newspaper!” And then, with one hand proceeded to show him on her mobile the webpage of the Monday Star that showcased me and with her other hand pointed at Jackie. After moving his eyes from me to Jackie’s face several times, the security guard made a sign to the group and led them to the VIP entrance. What’s not to like?

And obviously, when I was named the top most wanted photo to generate synthetic images of Black schoolgirls by e-Synthetic, the biggest generator of images from text inputs.

TLDP: Now that we know more about you, let’s go back to my intro. So far, it looks like a success story. Where did it all go wrong, to end up in the courts and with a family destroyed?

IMG: I said I had managed to cope with the mockery, the collages, and the insults. It was much harder for Jackie. She was only 9 at the time and although she was happy to get some perks — like the speedy access to the concert — she was not prepared for the downsides.

For example, some children at the school would make fun of her hairstyle, her posture, or how she was dressed that day.

Some parents complained to the school that kids were getting too much attention from the press.

Also, attendees of the Spelling Bee Contest that had taken their own photos of the award ceremony started sharing their sloppy images on social media… Some of those were really hideous and had nothing to do with me, who looked polished and professional.

In the middle of that shambles, the school called Jackie’s parents to ask them to keep her away from the school for a while, until things went back to normal. Both Betty and Harvey pushed back, blaming the school for bringing the photographer in to gain exposure at the expense of a little girl. The school replied that if there was someone to blame, it was the parents: they had not only given their consent in writing but also shared the photo on social media.

When Jackie learned that the school didn’t want her back, she refused to leave home altogether. She didn’t want any more attention. It was not fun anymore.

Her parents blamed all the family members: aunt Rose, who had asked for me on WhatsApp because she wanted to frame me; uncle Richard, who prompted Harvey to ask the photographer for me; uncle Joe, who shared me on Facebook; cousin Carol, who made me viral on Instagram… And everybody else, including those who had created videos and shared them on TikTok and YouTube.

All family members apologized and even deleted their posts but they had been reshared so many times that it was an impossible task to eliminate them all.

And that’s where e-Synthetic comes in. As all of us know, e-Synthetic is the largest subscription platform to generate images from text prompts. You can create amazing images by adding as few as 4 words to the prompt on their webpage.

I’ll explain how this works for the newbies. They use artificial intelligence to generate new images that satisfy the conditions of the text prompt using a mix of images from their database.

And their database is huge! It contains millions of images of all the things you can imagine: Art, people, buildings, cities, nature… Most of the images have been scraped from the web. For example, any photo on social media is fair game.

So, of course, I also got scraped by e-Synthetic! And I’ve been used profusely every time “Black girl” or any of its synonyms has been used in the text prompt.

Unfortunately, Jackie, who’s now a little bit older, feels that the whole situation is detrimental to her.

For example, when she learned that I was among the most used photos to generate synthetic images of Black schoolgirls, she realized e-Synthetic was making tons of money from using me — her image — without her receiving a cent.

And money was not the only problem. Understandably, she also didn’t like that parts of me appeared in images with degrading content, like pornography, created with e-Synthetic.

She cannot sue e-Synthetic — they downloaded me from social media — but she’s suing her parents for failing to protect her image. That’s me.

TLDP: A really tough situation. From the ethical point of view, don’t you think it is somewhat questionable that Jackie herself was never asked to give consent to publish or share her digital image, that is, you? Or that e-Synthetic didn’t contact her parents to seek their approval? She’s a minor, after all.

IMG: First, let me tell you that I empathize with Jackie. I exist because of her. And I also feel bad for her parents.

On the flip side, Jackie is a minor and her parents shared me on social media because I look like her. Now, they claim that they didn’t know about the drawbacks of the image becoming public… Come on! They should have known better.

There are detailed terms and conditions on social media platforms. Don’t tick the box “I have read the terms and conditions” if you haven’t done it or if you don’t understand them. Jackie’s parents are adults and it’s on them to safeguard her personal data privacy.

I say: Their child, their responsibility.

TLDP: Many thanks for being candid about where you stand on social media platforms’ accountability for the content they host. It’s a very polarizing topic and we’ve had guests on the podcast with opposite views.

I remember episode #176, where web cookie STpqRHSRaiPbh shared a thought experiment comparing our different attitudes toward social media and food. For example, social media companies use their Terms & Conditions to waive their responsibility for the content shared on their platforms. And we appear to be fine with it.

Then, let’s consider food. STpqRHSRaiPbh posits that we wouldn’t accept a supermarket selling rotten meat telling its customers that it is only a “meat platform” and cannot control what its suppliers sell to them…

Anyway, it’s a controversial issue and part of a broader conversation. Let’s now return the focus to you.

What false accusation has hurt you the most in this whole affair?

IMG: To be honest, the most painful has been when they say that it’s my responsibility that algorithms classify Jackie as an angry child or categorize her as a boy and not a girl. Let me say it again: It’s not my fault.

It’s well known that it’s not us, digital images, who are in charge of deciding on somebody’s gender or mood. We are going on with our lives and then an annotator — a tech worker that adds descriptions to data — or an algorithm decides that we’re the image of a girl, a man, or a baby boy based on their own biases and assumptions. And we know that current image algorithms are worse at predicting the gender of Black women compared to that of men or White women.

Same with emotions. Annotators and algorithms decide if the subjects in the images are sad, happy, or fearful based on pseudo-science. Again, it’s been demonstrated that they predict that subjects with darker skin are angrier compared with those with lighter skin even if they show the same facial expressions in the photos.

With all this evidence, why do I still have to put up with all that nonsense that those mistakes are my fault? Blame artificial intelligence, machine learning, and annotators, not us!

Ok, my rant is over.

TLDP: Thanks again for sharing these gems of wisdom, IMG_364245.jpg. This is so important for our younger audience. They’re hearing all the time that the problem with bias in artificial intelligence is the lack of diversity in data. You have done a great job at demonstrating to them that they are not the problem and that data is unfairly blamed for algorithms and people’s biases.

Next question. Can you point out the key to your success?

IMG: Definitely the Johnsons’ WhatsApp group. All those digital interactions were instrumental in getting me the exposure I needed to go global.

TLDP: What would you have liked to know at the beginning?

IMG: When they started sharing me on social media, I was very angry about the whole photoshop thing. I was perfect already! Why did some people have to make a mess of me and lighten my skin pixels? At the time, my self-esteem suffered a lot.

And then, one day, I realized that I’d never be able to end the world’s obsession with lighter skin anyway.

After that breakthrough moment, I was able to savor my success, even at the expense of digital bleaching.

TLDP: There are so many images of White people on the internet. What would you say to recent digital images of Non-White people that feel a lack of opportunity to go viral?

IMG: The opportunity is huge! With brands undergoing a massive DEIwashing…

TLDP: Wait, DEIwashing? Can you explain?

IMG: Thanks for asking. Actually, I coined the term myself.

DEIwashing is when organizations resort to performative diversity, inclusion, and equity tactics. For example, peppering their marketing — websites, brochures, videos — with images of Non-White people to convey a sense of diversity that doesn’t match that of their organization.

As I was saying before, with the pressure on organizations to DEIwash their images, there’s never been a better time to be an image of Non-White people. This is our time!

TLDP: Any final words for our audience?

IMG: Catch me if you can! Social media and e-Synthetic have made me indestructible. (guffaw)

TLDP: Thanks so much IMG_364245.jpg for this thought-provoking conversation. We wish you all the best in your professional career.

If you liked this episode, please consider leaving a review, sharing it with other data, and subscribing to the podcast.

We’ll be back next month with another data rockstar giving us a peek into their life.

Until then, take care!

END OF THE EPISODE


Before “The Life of Data Podcast,” I wrote The Graduation, where I also used speculative fiction. I won’t tell you the plot, only that the story was written in August 2020, well before ChatGPT was launched!

AI Chatbots in Customer Support: Breaking Down the Myths

An illustration containing electronic devices that are connected by arm-like structures
Anton Grabolle / Better Images of AI / Human-AI collaboration / CC-BY 4.0

I’m a Director of Scientific Support for a tech corporation that develops software for engineers and scientists. One of the aspects that makes us unique is that we deliver fantastic customer service.

We have records that confirm an impressive 98% customer satisfaction rate, year after year, for the last 14+ years. Moreover, many of our support representatives have been with us for over a decade — some even three! — and we have people retiring with us each year.

For a sector known for high employee turnover and operational costs, achieving such a feat is remarkable and a testament to the team’s success. The worst part? Support representatives are often portrayed as mindless robots repeating tasks without a deep understanding of the products and services they support.

That last assumption has spearheaded the idea that one of the best uses of AI—and Generative AI in particular—is substituting support agents with an army of chatbots.

The rationale? We’re told they are cheaper, more efficient, and improve customer satisfaction.

But is that true?

In this article, I review

  • The gap between outstanding and remedial support
  • Lessons from 60 years of chatbots
  • The reality underneath the AI chatbot hype
  • The unsustainability of support bots

Customer support: Champions vs Firefighters

I’ve delivered services throughout my commercial career in tech: Training, Contract Research, and now, for more than a decade, Scientific Support.

I’ve found that of the three services — training customers, delivering projects, and providing support — the last one creates the deepest connection between a tech company and its clients.

However, not all support is created equal, so what does great support look like?

And more importantly, what’s disguised under the “customer support” banner but is really a proxy for something else?

Customer support as an enabler

Customer service is the department that aims to empower customers to make the most out of their purchases.

On the surface, this may look like simply answering clients’ questions. Still, outstanding customer service is delivered when the representative is given the agency and tools to become the ambassador between the client and the organization.

What does that mean in practice?

  • The support representative doesn’t patronize the customer, diminish their issue, or downplay its negative impact. Instead, they focus on understanding the problem and its effect on the client. This creates a personalized experience.
  • The agent doesn’t overpromise or disguise bad news. Instead, they communicate roadblocks and suggest possible alternatives. This builds trust.
  • The support staff takes ownership of resolving the issue, no matter the number of iterations necessary or how many colleagues they need to involve in the case. This builds loyalty.

Over and over, I’ve seen this kind of customer support transform users into advocates, even for ordinary products and services.

Unfortunately, customer support is often misunderstood and misused.

Customer support as a stopgap

Rather than seeing support as a way to build the kind of relationship that ensures product and service renewals and increases the business footprint, many organizations see support as

  • A cost center
  • A way to make up for deficient — or nonexistent — product documentation
  • A remedy for poorly designed user experience
  • A shield to protect product managers’ valuable time from “irrelevant” customer feedback
  • A catch-all for lousy and inaccessible institutional websites
  • An outlet for customers to vent

In that context, it’s obvious why most organizations believe that swapping human support representatives for chatbots is a no-brainer.

And, contrary to what some want us to believe, this is not a new idea.

A short history of chatbots 

Eliza, the therapist

The first chatbot, created in 1966, played the role of a psychotherapist. She was named Eliza, after Eliza Doolittle in the play Pygmalion. The rationale was that, by changing how she spoke, the fictional character created the illusion that she was a duchess.

Eliza didn’t provide any solution. Instead, it asked questions and repeated users’ replies. Below is an excerpt of an interaction between Eliza and a user:

User: Men are all alike.
ELIZA: IN WHAT WAY
User: They’re always bugging us about something or other.
ELIZA: CAN YOU THINK OF A SPECIFIC EXAMPLE
User: Well, my boyfriend made me come here.
ELIZA: YOUR BOYFRIEND MADE YOU COME HERE
User: He says I’m depressed much of the time.
ELIZA: I AM SORRY TO HEAR YOU ARE DEPRESSED
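
Under the hood, Eliza relied on simple pattern matching plus pronoun “reflection” to turn the user’s words back into a question or an echo. Here is a minimal sketch of that general technique in Python (illustrative rules only, not Weizenbaum’s original script):

```python
import re

# Eliza-style pattern matching with pronoun "reflection".
# Illustrative rules only; Weizenbaum's original script was richer.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my"}

RULES = [
    (r".*\ball alike\b.*", "IN WHAT WAY"),
    (r".*\balways\b.*", "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
    (r"(.*) made me (.*)", "{0} MADE YOU {1}"),
    (r".*i'?m (.*)", "I AM SORRY TO HEAR YOU ARE {0}"),
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the reply echoes the user.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(text: str) -> str:
    text = text.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            groups = (reflect(g) for g in match.groups())
            return template.format(*groups).upper()
    return "PLEASE GO ON"  # fallback when no rule matches

print(respond("Well, my boyfriend made me come here."))
# -> WELL, YOUR BOYFRIEND MADE YOU COME HERE
```

The point is not fidelity to the 1966 program, but how little machinery it takes to trigger the effect Weizenbaum observed.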

Eliza’s creator — computer scientist Joseph Weizenbaum — was very surprised to observe that people would treat the chatbot as human, and that even concise interactions with it would elicit emotional responses

“Some subjects have been very hard to convince that Eliza (with its present script) is not human” 

Joseph Weizenbaum

We now have a name for this kind of behaviour

“The ELIZA effect is the tendency to project human traits — such as experience, semantic comprehension or empathy — into computer programs that have a textual interface.

The effect is a category mistake that arises when the program’s symbolic computations are described through terms such as “think”, “know” or “understand.”

Through the years, other chatbots have become famous too.

Tay, the zero chill chatbot

In 2016, Microsoft released the chatbot Tay on X (aka Twitter). Tay’s profile image was that of a “female,” and it was “designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter.”

The bot’s social media profile was an open invitation to conversation. It read, “The more you talk, the smarter Tay gets.”

Tay’s Twitter page. Source: Microsoft.

What could go wrong? Trolls.

They “taught” Tay racist and sexually charged content that the chatbot adopted. For example

“bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got.”

After several attempts to “fix” Tay, the chatbot was shut down seven days later.

Chatbot disaster at the NGO

The helpline of the US National Eating Disorder Association (NEDA) served nearly 70,000 people and families in 2022.

Then, they replaced their six paid staff and 200 volunteers with chatbot Tessa.

The bot was developed based on decades of research conducted by experts on eating disorders. Still, it was reported to offer dieting advice to vulnerable people seeking help.

The result? Under media pressure over the chatbot’s repeated potentially harmful responses, NEDA took the chatbot down. Now, 70,000 people were left without either chatbots or humans to help them.

Lessons learned?

After these and other negative experiences with chatbots around the world, we might have thought we understood the security and performance limitations of chatbots, as well as how easy it is for our brains to “humanize” them.

However, the advent of ChatGPT has made us forget all the lessons learned and instead has enticed us to believe that chatbots are a suitable replacement for entire customer support departments.

The chatbot hype

CEOs boasting about replacing workers with chatbots

If you think companies would be wary of advertising that they are replacing people with chatbots, you’re mistaken.

In July 2023, Suumit Shah — CEO of the e-commerce company Dukaan — bragged on the social media platform X that they had replaced 90% of their customer support staff with a chatbot developed in-house.

We had to layoff 90% of our support team because of this AI chatbot.

Tough? Yes. Necessary? Absolutely.

The results?

Time to first response went from 1m 44s to INSTANT!

Resolution time went from 2h 13m to 3m 12s

Customer support costs reduced by ~85%

Note the use of the word “necessary” as a way to absolve the organisation of responsibility for the layoffs. I also wonder about the feelings of loyalty and trust of the remaining 10% of the support team towards their employer.

And Shah is not the only one.

Last February, Klarna’s CEO — Sebastian Siemiatkowski — gloated on X that their AI can do the work of 700 people.

“This is a breakthrough in practical application of AI! 

Klarnas AI assistant, powered by OpenAI, has in its first 4 weeks handled 2.3 m customer service chats and the data and insights are staggering: 

[…] It performs the equivalent job of 700 full time agents… read more about this below. 

So while we are happy about the results for our customers, our employees who have developed it and our shareholders, it raises the topic of the implications it will have for society. 

In our case, customer service has been handled by on average 3000 full time agents employed by our customer service / outsourcing partners. Those partners employ 200 000 people, so in the short term this will only mean that those agents will work for other customers of those partners. 

But in the longer term, […] while it may be a positive impact for society as a whole, we need to consider the implications for the individuals affected. 

We decided to share these statistics to raise the awareness and encourage a proactive approach to the topic of AI. For decision makers worldwide to recognise this is not just “in the future”, this is happening right now.”

In summary

  • Klarna wants us to believe that the company is releasing this AI assistant for the benefit of others — clients, their developers, and shareholders — but that their core concern is about the future of work.
  • Siemiatkowski only sees layoffs as a problem when they affect his direct employees. Partners’ workers are not his problem.
  • He frames the negative impacts of replacing humans with chatbots as an “individual” problem.
  • Klarna deflects any accountability for the negative impacts to the “decision makers worldwide.”

Shah and Siemiatkowski are birds of a feather: Business leaders reaping the benefits of the AI chatbot hype without shouldering any responsibility for the harms.

When chatbots disguise process improvements

A brightly coloured illustration which can be viewed in any direction. It has several scenes within it: people in front of computers seeming stressed, a number of faces overlaid over each other, squashed emojis and other motifs.
Clarote & AI4Media / Better Images of AI / User/Chimera / CC-BY 4.0

In some organizations, customer service agents are seen as jacks of all trades — their work is akin to a Whac-A-Mole game where the goal is to make up for all the clunky and disconnected internal workflows.

The Harvard Business Review article “Your Organization Isn’t Designed to Work with GenAI” provides a great example of this organizational dysfunction.

The piece presents a framework developed to “derive” value from GenAI. It’s called Design for Dialogue. To warm us up, the article showers us with a deluge of anthropomorphic language signalling that both humans and AI are in this “together.”

“Designing for Dialogue is rooted in the idea that technology and humans can share responsibilities dynamically.”

or

“By designing for dialogue, organizations can create a symbiotic relationship between humans and GenAI.”

Then, the authors offer us an example of what’s possible

A good example is the customer service model employed by Jerry, a company valued at $450 million with over five million customers that serves as a one stop-shop for car owners to get insurance and financing. 

Jerry receives over 200,000 messages a month from customers. With such high volume, the company struggled to respond to customer queries within 24 hours, let alone minutes or seconds. 

By installing their GenAI solution in May 2023, they moved from having humans in the lead in the entirety of the customer service process and answering only 54% of customer inquiries within 24 hours or less to having AI in the lead 100% of the time and answering over 96% of inquiries within 30 seconds by June 2023.

They project $4 million in annual savings from this transformation.”

Sounds amazing, doesn’t it?

However, if you think it was a case of simply “swapping” humans with chatbots, let me burst your bubble — it takes a village.

Reading the article, we uncover the details underneath that “transformation.”

  • They broke down the customer service agent’s role into multiple knowledge domains and tasks.
  • They discovered that there are points in the AI–customer interaction when matters need to be escalated to the agent, who then takes the lead, so they designed interaction protocols to transfer the inquiry to a human agent (a minimal code sketch of this pattern follows after these lists).
  • AI chatbots conduct the laborious hunt for information and suggest a course of action for the agent.
  • Engineers review failures daily and adjust the system to correct them.

In other words,

  • Customer support agents used to be flooded with various requests without filtering between domains and tasks.
  • As part of the makeover, they implemented mechanisms to parse and route support requests based on topic and action. They upgraded their support ticketing system from an amateur “team” inbox to a professional call center.
  • We also learn that customer representatives use the bots to retrieve information, hinting that all data — service requests, sales quotes, licenses, marketing datasheets — are collected in a generic bucket instead of being classified in a structured, searchable way, i.e. a knowledge base.

And despite all that progress

  • They designed the chatbots to pass the “hot potatoes” to agents
  • The system requires daily monitoring by humans.
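
Stripped of the buzzwords, what the article describes is a classify, route, and escalate loop: the AI drafts a reply, and a human takes the lead whenever confidence drops. Below is a minimal sketch of that pattern; the domains, the confidence threshold, and both helper functions are hypothetical illustrations, not Jerry’s actual system.

```python
from dataclasses import dataclass

# A minimal sketch of the classify-route-escalate pattern described
# above. The domains, the confidence threshold, and both helper
# functions are hypothetical illustrations, not Jerry's actual system.

KNOWN_DOMAINS = {"insurance", "financing", "billing"}
CONFIDENCE_THRESHOLD = 0.8  # below this bar, a human takes the lead

@dataclass
class Triage:
    domain: str
    confidence: float
    draft_reply: str

def ai_triage(message: str) -> Triage:
    # Toy stand-in for the GenAI step that classifies the inquiry and
    # drafts a reply; a real system would call a model here.
    if "insurance" in message.lower():
        return Triage("insurance", 0.9, "Here's how to update your policy...")
    return Triage("unknown", 0.2, "Could you tell me more about your issue?")

def escalate_to_agent(message: str, draft: str) -> str:
    # Hand-off protocol: queue the inquiry for a human agent,
    # attaching the AI's draft as a head start.
    return f"[queued for human agent] customer: {message!r} | draft: {draft!r}"

def handle_inquiry(message: str) -> str:
    triage = ai_triage(message)
    if triage.domain in KNOWN_DOMAINS and triage.confidence >= CONFIDENCE_THRESHOLD:
        return triage.draft_reply  # AI "in the lead": answered in seconds
    return escalate_to_agent(message, triage.draft_reply)

print(handle_inquiry("How do I add a driver to my insurance?"))
print(handle_inquiry("My app crashed and I lost my quote"))
```

Note how much of the win comes from the routing and hand-off protocol itself; the model only drafts text.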

If you still think this is about AI chatbots rather than about improving operations, let me share with you the end of the article.

“Yes, GenAI can automate tasks and augment human capabilities. But reimagining processes in a way that utilizes it as an active, learning, and adaptable partner forges the path to new levels of innovation and efficiency.”

In addition to hiding process improvements, chatbots can also disguise human labour.

AI washing or the new Mechanical Turk

A cross-section of the Turk from Racknitz, showing how he thought the operator sat inside as he played his opponent. Racknitz was wrong both about the position of the operator and the dimensions of the automaton. Source: Wikipedia.

Historically, machines have often provided a veneer of novelty to work performed by humans.

The Mechanical Turk was a fraudulent chess-playing machine constructed in 1770 by Wolfgang von Kempelen. A mechanical illusion allowed a human chess master hiding inside to operate the machine. It defeated politicians such as Napoleon Bonaparte and Benjamin Franklin.

Chatbots are no different.

In April, Amazon announced that they’d be removing their “Just Walk Out” technology, which allowed shoppers to skip the check-out line. In theory, the technology was fully automated thanks to computer vision.

In practice, about 1,000 workers in India reviewed what customers picked up and left the stores with.

In 2022, a Business Insider report said that 700 out of every 1,000 “Just Walk Out” transactions were verified by these workers. Following this, an Amazon spokesperson said that the India-based team only assisted in training the model used for “Just Walk Out”.

That is, Amazon wanted us to believe that although the technology was launched in 2018 — branded as “Amazon Go” — they still needed about 1,000 workers in India to train the model in 2022.

Still, whether the technology was “untrainable” or required an army of humans to deliver the work, it’s not surprising that Amazon phased it out. It didn’t live up to its hype.

And they were not the only ones.

Last August, Presto Automation — a company that provides drive-thru systems — claimed on its website that its AI could take over 95 percent of drive-thru orders “without any human intervention.”

Later, they admitted in filings with the US Securities and Exchange Commission that they employed “off-site agents in countries like the Philippines who help its Presto Voice chatbots in over 70 percent of customer interactions.”

The fix? To change their claims. They now advertise the technology as “95 percent without any restaurant or staff intervention.”

The Amazon and Presto Automation cases suggest that, in addition to clearly indicating when chatbots use AI, we may also need to label some tech applications as “powered by humans.”

Of course, there is a final use case for AI chatbots: As scapegoats.

Blame it on the algorithm

Last February, Air Canada made the headlines when it was ordered to pay compensation after its chatbot gave a customer inaccurate information that led him to miss out on a reduced fare. A quick summary below

  • A customer interacted with a chatbot on the Air Canada website, more precisely, asking for reimbursement information about a flight.
  • The chatbot provided inaccurate information.
  • The customer’s reimbursement claim was rejected by Air Canada because it didn’t follow the policies on their website, even though the customer shared a screenshot of his written exchange with the chatbot.
  • The customer took Air Canada to court and won.

At a high level, this looks much like the case where a human support representative provides inaccurate information, but the devil is always in the details.

During the trial, Air Canada argued that they were not liable because their chatbot “was responsible for its own actions” when giving wrong information about the fare.

Fortunately, the court ordered Air Canada to reimburse the customer, but this opens a can of worms:

  • What if Air Canada had terms and conditions similar to ChatGPT or Google Gemini that “absolved” them from the chatbot’s replies?
  • Does Air Canada also deflect responsibility when a support representative makes a mistake, or is it only for AI systems?

We’d be naïve to think that this attempt at using an AI chatbot for dodging responsibility is a one-off.

The planetary costs of chatbots

A brightly coloured illustration which can be viewed in any direction. It has several scenes within it: miners digging in front of a huge mountain representing mineral resources, a hand holding a lump of coal or carbon, hands manipulating stock charts and error messages, as well as some women performing tasks on computers.

Clarote & AI4Media / Better Images of AI / Labour/Resources / CC-BY 4.0

Tech companies keep trying to convince us that the current glitches with GenAI are “growing pains” and that we “just” need bigger models and more powerful computer chips.

And what’s the upside of enduring those teething problems? The promise of the massive efficiencies chatbots will bring to the table. Once the technology is “perfect”, there will be no more need for workers to perform or remediate the half-cooked bot work. Bottomless savings in terms of time and staff.

But is that true?

The reality is that those productivity gains come from exploiting both people and the planet.

The people

Many of us are used to hearing the recorded message “this call may be recorded for training purposes” when we phone a support hotline. But how far can that “training” go?

Customer support chatbots are being developed using data from millions of exchanges between support representatives and clients. How are all those “creators” being compensated? Or should we now assume that any interaction with support can be collected, analyzed, and repurposed to build organizations’ AI systems?

Moreover, the models underneath those AI chatbots must be trained and sanitized for toxic content; however, that’s not a highly rewarded job. Let’s remember that OpenAI used Kenyan workers paid less than $2 per hour to make ChatGPT less toxic.

And it’s not only about the humans creating and curating that content. There are also humans behind the appliances we use to access those chatbots.

  • For example, cobalt is a critical mineral for every lithium-ion battery, and the Democratic Republic of Congo provides at least 50% of the world’s cobalt supply. Forty thousand children mine it, paid $1–2 for working up to 12 hours daily while inhaling toxic cobalt dust.

80% of electronic waste in the US and most other countries is transported to Asia. Workers on e-waste sites are paid an average of $1.50 per day, with women frequently having the lowest-tier jobs. They are exposed to harmful materials, chemicals, and acids as they pick and separate the electronic equipment into its components, which in turn negatively affects their morbidity, mortality, and fertility.

The planet

The terminology and imagery used by Big Tech to refer to the infrastructure underpinning artificial intelligence has misled us into believing that AI is ethereal and cost-free.

Nothing could be further from the truth. AI is rooted in material objects: datacentres, servers, smartphones, and laptops. Moreover, training and using AI models demands energy and water and generates CO2.

Let’s crunch some numbers.

  • Luccioni and co-workers estimated that the training of GPT-3 — a GenAI model that has underpinned the development of many chatbots — emitted about 500 metric tons of carbon, roughly equivalent to over a million miles driven by an average gasoline-powered car. It also required the evaporation of 700,000 litres (185,000 gallons) of fresh water to cool down Microsoft’s high-end data centers.
  • It’s estimated that using GPT-3 requires about 500 ml (16 ounces) of water for every 10–50 responses (a back-of-the-envelope scale check follows after this list).
  • A new report from the International Energy Agency (IEA) forecasts that the AI industry could burn through ten times as much electricity in 2026 as in 2023.
  • Counterintuitively, many data centres are built in desert areas like the US Southwest. Why? It’s easier to remove the heat generated inside the data centre in a dry environment. Moreover, that region has access to cheap and reliable non-renewable energy from the largest nuclear plant in the country.
  • Coming back to e-waste, we generate around 40 million tons of electronic waste every year worldwide and only 12.5% is recycled.
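
To get a feel for what the per-response water estimate above means at scale, here is a back-of-the-envelope check. The 500 ml per 10–50 responses figure is the cited estimate; the daily query volume is a purely illustrative assumption, not a reported number.

```python
# Back-of-the-envelope scale check for the water figure above.
# 500 ml per 10-50 responses is the cited estimate; the daily
# query volume is a purely illustrative assumption.
ML_PER_RESPONSE_LOW = 500 / 50    # 10 ml per response (optimistic)
ML_PER_RESPONSE_HIGH = 500 / 10   # 50 ml per response (pessimistic)

ASSUMED_DAILY_QUERIES = 10_000_000  # hypothetical volume, for scale only

low = ASSUMED_DAILY_QUERIES * ML_PER_RESPONSE_LOW / 1_000    # litres/day
high = ASSUMED_DAILY_QUERIES * ML_PER_RESPONSE_HIGH / 1_000  # litres/day
print(f"{low:,.0f} to {high:,.0f} litres of water per day")
# -> 100,000 to 500,000 litres of water per day
```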

In summary, the efficiencies that chatbots are supposed to bring in appear to be based on exploitative labour, stolen content, and depletion of natural resources.

For reflection

Organizations — including NGOs and governments — are under the spell of the AI chatbot mirage. They see it as a magic weapon to cut costs, increase efficiency, and boost productivity.

Unfortunately, when things don’t go as planned, rather than questioning what’s wrong with using a parrot to do the work of a human, they want us to believe that the solution is sending the parrot to Harvard.

That approach prioritizes the short-term gains of a few — the chatbot sellers and purchasers — to the detriment of the long-term prosperity of people and the planet.

My perspective as a tech employee?

I don’t feel proud when I hear a CEO bragging about AI replacing workers. I don’t enjoy seeing a company claim that chatbots provide the same customer experience as humans. Nor do I appreciate organizations obliterating the materiality of artificial intelligence.

Instead, I feel moral injury.

And you, how do YOU feel?

PS. You and AI

  • Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it’d take you years to ramp up your AI literacy?
  • Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
  • Are you concerned because your clients are prioritising AI but you keep procrastinating on ​learning about it because you think you’re not “smart enough”?

I’ve got you covered.