
AI Chatbots in Customer Support: Breaking Down the Myths

An illustration containing electronic devices that are connected by arm-like structures
Anton Grabolle / Better Images of AI / Human-AI collaboration / CC-BY 4.0

I’m a Director of Scientific Support for a tech corporation that develops software for engineers and scientists. One of the things that make us unique is that we deliver fantastic customer service.

We have records confirming an impressive 98% customer satisfaction rate, year after year, for the last 14+ years. Moreover, many of our support representatives have been with us for over a decade (some even three!), and we have people retiring with us each year.

For a sector known for high employee turnover and operational costs, such a feat is remarkable and a testament to the team. The worst part? Support representatives are often portrayed as mindless robots repeating tasks without a deep understanding of the products and services they support.

That last assumption has fuelled the idea that one of the best uses of AI, and Generative AI in particular, is replacing support agents with an army of chatbots.

The rationale? We’re told they are cheaper, more efficient, and improve customer satisfaction.

But is that true?

In this article, I review

  • The gap between outstanding support and support as a stopgap
  • Lessons from 60 years of chatbots
  • The reality underneath the AI chatbot hype
  • The unsustainability of support bots

Customer support: Champions vs Firefighters

I’ve delivered services throughout my career in tech: Training, Contract Research, and now, for more than a decade, Scientific Support.

I’ve found that of the three services — training customers, delivering projects, and providing support — the last one creates the deepest connection between a tech company and its clients.

However, not all support is created equal, so what does great support look like?

And more importantly, what hides under the “customer support” banner while really serving as a proxy for something else?

Customer support as an enabler

Customer service is the department that aims to empower customers to make the most out of their purchases.

On the surface, this may look like simply answering clients’ questions. Still, outstanding customer service is delivered when the representative is given the agency and tools to become the ambassador between the client and the organization.

What does that mean in practice?

  • The support representative doesn’t patronize the customer, diminish their issue, or downplay its negative impact. Instead, they focus on understanding the problem and its effect on the client. This creates a personalized experience.
  • The agent doesn’t overpromise or disguise bad news. Instead, they communicate openly about roadblocks and suggest possible alternatives. This builds trust.
  • The support staff takes ownership of resolving the issue, no matter the number of iterations necessary or how many colleagues they need to involve in the case. This builds loyalty.

Over and over, I’ve seen this kind of customer support transform users into advocates, even for ordinary products and services.

Unfortunately, customer support is often misunderstood and misused.

Customer support as a stopgap

Rather than seeing support as a way to build the kind of relationship that ensures product and service renewals and increases the business footprint, many organizations see support as

  • A cost center
  • A way to make up for deficient, or nonexistent, product documentation
  • A remedy for poorly designed user experience
  • A shield to protect product managers’ valuable time from “irrelevant” customer feedback
  • A catch-all for lousy and inaccessible institutional websites
  • An outlet for customers to vent

In that context, it’s obvious why most organizations believe that swapping human support representatives for chatbots is a no-brainer.

And this is not a new idea, despite what some would have us believe.

A short history of chatbots 

Eliza, the therapist

The first chatbot, created in 1966, played the role of a psychotherapist. It was named Eliza, after Eliza Doolittle in the play Pygmalion, a character who created the illusion of being a duchess simply by changing how she spoke.

Eliza didn’t provide any solution. Instead, it asked questions and repeated users’ replies. Below is an excerpt of an interaction between Eliza and a user:

User: Men are all alike.
ELIZA: IN WHAT WAY
User: They’re always bugging us about something or other.
ELIZA: CAN YOU THINK OF A SPECIFIC EXAMPLE
User: Well, my boyfriend made me come here.
ELIZA: YOUR BOYFRIEND MADE YOU COME HERE
User: He says I’m depressed much of the time.
ELIZA: I AM SORRY TO HEAR YOU ARE DEPRESSED
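
To appreciate how little machinery sits behind that illusion, here is a minimal sketch, in Python rather than Weizenbaum’s original MAD-SLIP, of the kind of pattern matching and pronoun reflection ELIZA relied on. The rules and vocabulary below are illustrative assumptions, not the original DOCTOR script.

```python
import re

# Toy ELIZA-style rules: regular-expression templates plus pronoun "reflection".
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "i'm": "you're"}

RULES = [
    (re.compile(r"my (.*) made me (.*)", re.I), "YOUR {0} MADE YOU {1}"),
    (re.compile(r"i am (.*)", re.I), "I AM SORRY TO HEAR YOU ARE {0}"),
    (re.compile(r"(.*) all alike", re.I), "IN WHAT WAY"),
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the reply mirrors the user.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    # Apply the first matching rule; otherwise fall back to a generic prompt.
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            groups = (reflect(g).rstrip(".") for g in match.groups())
            return template.format(*groups).upper()
    return "CAN YOU THINK OF A SPECIFIC EXAMPLE"

print(respond("Men are all alike."))                # IN WHAT WAY
print(respond("My boyfriend made me come here."))   # YOUR BOYFRIEND MADE YOU COME HERE
print(respond("I am depressed much of the time."))  # I AM SORRY TO HEAR YOU ARE DEPRESSED MUCH OF THE TIME
```

A handful of rules like these is enough to reproduce the excerpt above, which is precisely the point: there is no understanding anywhere in the loop.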

Eliza’s creator, computer scientist Joseph Weizenbaum, was surprised to observe that people treated the chatbot as a human and had emotional responses to it, even through brief interactions.

“Some subjects have been very hard to convince that Eliza (with its present script) is not human” 

Joseph Weizenbaum

We now have a name for this kind of behaviour:

“The ELIZA effect is the tendency to project human traits — such as experience, semantic comprehension or empathy — into computer programs that have a textual interface.

The effect is a category mistake that arises when the program’s symbolic computations are described through terms such as “think”, “know” or “understand.”

Through the years, other chatbots have become famous too.

Tay, the zero chill chatbot

In 2016, Microsoft released the chatbot Tay on Twitter (now X). Tay’s profile image was that of a “female,” and it was “designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter.”

The bot’s social media profile was an open invitation to conversation. It read, “The more you talk, the smarter Tay gets.”

Tay’s Twitter page. Image: Microsoft.

What could go wrong? Trolls.

They “taught” Tay racist and sexually charged content that the chatbot adopted. For example

“bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got.”

After several attempts to “fix” Tay, the chatbot was shut down seven days later.

Chatbot disaster at the NGO

The helpline of the US National Eating Disorder Association (NEDA) served nearly 70,000 people and families in 2022.

Then, they replaced their six paid staff and 200 volunteers with chatbot Tessa.

The bot was developed based on decades of research conducted by experts on eating disorders. Still, it was reported to offer dieting advice to vulnerable people seeking help.

The result? Under media pressure over the chatbot’s repeated, potentially harmful responses, NEDA shut the service down. Now, the 70,000 people it had served were left with neither chatbots nor humans to help them.

Lessons learned?

After these and other negative experiences with chatbots around the world, we might have thought we understood chatbots’ security and performance limitations, as well as how easily our brains “humanize” them.

However, the advent of ChatGPT has made us forget all those lessons and enticed us to believe that chatbots are a suitable replacement for entire customer support departments.

The chatbot hype

CEOs boasting about replacing workers with chatbots

If you think companies would be wary of advertising that they are replacing people with chatbots, you’re mistaken.

In July 2023, Suumit Shah, CEO of the e-commerce company Dukaan, bragged on the social media platform X that they had replaced 90% of their customer support staff with a chatbot developed in-house.

We had to layoff 90% of our support team because of this AI chatbot.

Tough? Yes. Necessary? Absolutely.

The results?

Time to first response went from 1m 44s to INSTANT!

Resolution time went from 2h 13m to 3m 12s

Customer support costs reduced by ~85%

Note the use of the word “necessary” as a way to exonerate the organisation for the layoffs. I also wonder how much loyalty and trust the remaining 10% of the support team feel towards their employer.

And Shah is not the only one.

Last February, Klarna’s CEO — Sebastian Siemiatkowski — gloated on X that their AI can do the work of 700 people.

“This is a breakthrough in practical application of AI! 

Klarnas AI assistant, powered by OpenAI, has in its first 4 weeks handled 2.3 m customer service chats and the data and insights are staggering: 

[…] It performs the equivalent job of 700 full time agents… read more about this below. 

So while we are happy about the results for our customers, our employees who have developed it and our shareholders, it raises the topic of the implications it will have for society. 

In our case, customer service has been handled by on average 3000 full time agents employed by our customer service / outsourcing partners. Those partners employ 200 000 people, so in the short term this will only mean that those agents will work for other customers of those partners. 

But in the longer term, […] while it may be a positive impact for society as a whole, we need to consider the implications for the individuals affected. 

We decided to share these statistics to raise the awareness and encourage a proactive approach to the topic of AI. For decision makers worldwide to recognise this is not just “in the future”, this is happening right now.”

In summary

  • Klarna wants us to believe that the company is releasing this AI assistant for the benefit of others (clients, its developers, and shareholders) while its core concern is the future of work.
  • Siemiatkowski only sees layoffs as a problem when they affect his direct employees. Partners’ workers are not his problem.
  • He frames the negative impacts of replacing humans with chatbots as an “individual” problem.
  • Klarna deflects any accountability for the negative impacts to the “decision makers worldwide.”

Shah and Siemiatkowski are birds of a feather: Business leaders reaping the benefits of the AI chatbot hype without shouldering any responsibility for the harms.

When chatbots disguise process improvements

A brightly coloured illustration which can be viewed in any direction. It has several scenes within it: people in front of computers seeming stressed, a number of faces overlaid over each other, squashed emojis and other motifs.
Clarote & AI4Media / Better Images of AI / User/Chimera / CC-BY 4.0

In some organizations, customer service agents are seen as jacks of all trades — their work is akin to a Whac-A-Mole game where the goal is to make up for all the clunky and disconnected internal workflows.

The Harvard Business Review article “Your Organization Isn’t Designed to Work with GenAI” provides a great example of this organizational dysfunction.

The piece presents a framework developed to “derive” value from GenAI. It’s called Design for Dialogue. To warm us up, the article showers us with a deluge of anthropomorphic language signalling that both humans and AI are in this “together.”

“Designing for Dialogue is rooted in the idea that technology and humans can share responsibilities dynamically.”

or

“By designing for dialogue, organizations can create a symbiotic relationship between humans and GenAI.”

Then, the authors offer us an example of what’s possible

“A good example is the customer service model employed by Jerry, a company valued at $450 million with over five million customers that serves as a one stop-shop for car owners to get insurance and financing.

Jerry receives over 200,000 messages a month from customers. With such high volume, the company struggled to respond to customer queries within 24 hours, let alone minutes or seconds. 

By installing their GenAI solution in May 2023, they moved from having humans in the lead in the entirety of the customer service process and answering only 54% of customer inquiries within 24 hours or less to having AI in the lead 100% of the time and answering over 96% of inquiries within 30 seconds by June 2023.

They project $4 million in annual savings from this transformation.”

Sounds amazing, doesn’t it?

However, if you think it was a case of simply “swapping” humans with chatbots, let me burst your bubble: it takes a village.

Reading the article, we uncover the details underneath that “transformation.”

  • They broke down the customer service agent’s role into multiple knowledge domains and tasks.
  • They discovered that there are points in the AI–customer interaction when matters need to be escalated, so they designed interaction protocols to transfer the inquiry to a human agent, who then takes the lead (a toy sketch of such a protocol follows this list).
  • AI chatbots conduct the laborious hunt for information and suggest a course of action for the agent.
  • Engineers review failures daily and adjust the system to correct them.
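
The article doesn’t publish Jerry’s code, but the escalation protocol it describes is easy to picture. Below is a hypothetical sketch in Python: an upstream classifier tags each inquiry with a topic, the bot drafts a reply with a confidence score, and anything sensitive or low-confidence is handed to a human agent. Every name, topic, and threshold is an assumption for illustration, not Jerry’s actual system.

```python
from dataclasses import dataclass

# Hypothetical escalation protocol; names, topics, and thresholds are illustrative.
SENSITIVE_TOPICS = {"billing_dispute", "policy_cancellation", "complaint"}
CONFIDENCE_THRESHOLD = 0.8  # below this, a human takes the lead

@dataclass
class Inquiry:
    customer_id: str
    topic: str         # assigned by an upstream topic classifier
    text: str

@dataclass
class DraftReply:
    text: str
    confidence: float  # the bot's confidence in its own draft

def route(inquiry: Inquiry, draft: DraftReply) -> str:
    """Decide whether the bot replies directly or escalates to a human agent."""
    if inquiry.topic in SENSITIVE_TOPICS:
        return "escalate_to_human"
    if draft.confidence < CONFIDENCE_THRESHOLD:
        return "escalate_to_human"
    return "send_bot_reply"

# Example: an ordinary topic, but a low-confidence draft still goes to an agent.
inquiry = Inquiry("C123", "quote_question", "Why did my premium change?")
draft = DraftReply("Your premium changed because ...", confidence=0.55)
print(route(inquiry, draft))  # -> escalate_to_human
```

Every branch that returns “escalate_to_human” is exactly the human labour the headline numbers gloss over, and the daily failure reviews are what keep the thresholds honest.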

In other words,

  • Customer support agents used to be flooded with various requests without filtering between domains and tasks.
  • As part of the makeover, they implemented mechanisms to parse and route support requests based on topic and action. They upgraded their support ticketing system from an amateur “team” inbox to a professional call center.
  • We also learn that customer representatives use the bots to retrieve information, hinting that all data — service requests, sales quotes, licenses, marketing datasheets — are collected in a generic bucket instead of being classified in a structured, searchable way, i.e. a knowledge base.

And despite all that progress

  • They designed the chatbots to pass the “hot potatoes” to agents
  • The system requires daily monitoring by humans.

If you still doubt that this is about improving operations rather than about AI chatbots, let me share the end of the article with you.

“Yes, GenAI can automate tasks and augment human capabilities. But reimagining processes in a way that utilizes it as an active, learning, and adaptable partner forges the path to new levels of innovation and efficiency.”

In addition to hiding process improvements, chatbots can also disguise human labour.

AI washing or the new Mechanical Turk

A cross-section of the Turk from Racknitz, showing how he thought the operator sat inside as he played his opponent. Racknitz was wrong both about the position of the operator and the dimensions of the automaton. Image: Wikipedia.

Historically, machines have often provided a veneer of novelty to work performed by humans.

The Mechanical Turk was a fraudulent chess-playing machine constructed in 1770 by Wolfgang von Kempelen. A mechanical illusion allowed a human chess master hiding inside to operate the machine. It defeated politicians such as Napoleon Bonaparte and Benjamin Franklin.

Chatbots are no different.

In April, Amazon announced that it would be removing its “Just Walk Out” technology, which allowed shoppers to skip the checkout line. In theory, the technology was fully automated thanks to computer vision.

In practice, about 1,000 workers in India reviewed what customers picked up and left the stores with.

In 2022, a Business Insider report said that 700 out of every 1,000 “Just Walk Out” transactions were verified by these workers. Following this, an Amazon spokesperson said that the India-based team only assisted in training the model used for “Just Walk Out”.

That is, Amazon wanted us to believe that although the technology launched in 2018, branded as “Amazon Go,” they still needed about 1,000 workers in India to train the model in 2022.

Still, whether the technology was “untrainable” or required an army of humans to deliver the work, it’s not surprising that Amazon phased it out. It didn’t live up to its hype.

And they were not the only ones.

Last August, Presto Automation — a company that provides drive-thru systems — claimed on its website that its AI could take over 95 percent of drive-thru orders “without any human intervention.”

Later, they admitted in filings with the US Securities and Exchange Commission that they employed “off-site agents in countries like the Philippines who help its Presto Voice chatbots in over 70 percent of customer interactions.”

The fix? To change their claims. They now advertise the technology as “95 percent without any restaurant or staff intervention.”

The Amazon and Presto Automation cases suggest that, in addition to clearly indicating when chatbots use AI, we may also need to label some tech applications as “powered by humans.”

Of course, there is a final use case for AI chatbots: As scapegoats.

Blame it on the algorithm

Last February, Air Canada made the headlines when it was ordered to pay compensation after its chatbot gave a customer inaccurate information that led him to miss out on a reduced fare. A quick summary:

  • A customer interacted with a chatbot on the Air Canada website, asking specifically for reimbursement information about a flight.
  • The chatbot provided inaccurate information.
  • Air Canada rejected the customer’s reimbursement claim because it didn’t comply with the policies on their website, even though the customer shared a screenshot of his written exchange with the chatbot.
  • The customer took Air Canada to court and won.

At a high level, this looks just like a case in which a human support representative had provided inaccurate information, but the devil is always in the details.

During the trial, Air Canada argued that they were not liable because their chatbot “was responsible for its own actions” when giving wrong information about the fare.

Fortunately, the court ordered Air Canada to reimburse the customer, but this opens a can of worms:

  • What if Air Canada had terms and conditions similar to ChatGPT or Google Gemini that “absolved” them from the chatbot’s replies?
  • Does Air Canada also deflect responsibility when a human support representative makes a mistake, or is that reserved for AI systems?

We’d be naïve to think that this attempt at using an AI chatbot for dodging responsibility is a one-off.

The planetary costs of chatbots

A brightly coloured illustration which can be viewed in any direction. It has several scenes within it: miners digging in front of a huge mountain representing mineral resources, a hand holding a lump of coal or carbon, hands manipulating stock charts and error messages, as well as some women performing tasks on computers.

Clarote & AI4Media / Better Images of AI / Labour/Resources / CC-BY 4.0

Tech companies keep trying to convince us that the current glitches with GenAI are “growing pains” and that we “just” need bigger models and more powerful computer chips.

And what’s the upside to enduring those teething problems? The promise of the massive efficiencies chatbots will bring to the table. Once the technology is “perfect”, there will be no more need for workers to perform or remediate half-baked bot work. Bottomless savings in time and staff.

But is that true?

The reality is that those productivity gains come from exploiting both people and the planet.

The people

Many of us are used to hearing the recorded message “this call may be recorded for training purposes” when we phone a support hotline. But how far can that “training” go?

Customer support chatbots are being developed using data from millions of exchanges between support representatives and clients. How are all those “creators” being compensated? Or should we now assume that any interaction with support can be collected, analyzed, and repurposed to build organizations’ AI systems?

Moreover, the models underneath those AI chatbots must be trained and sanitized for toxic content; however, that’s not a highly rewarded job. Let’s remember that OpenAI used Kenyan workers paid less than $2 per hour to make ChatGPT less toxic.

And it’s not only about the humans creating and curating that content. There are also humans behind the appliances we use to access those chatbots.

For example, cobalt is a critical mineral for every lithium-ion battery, and the Democratic Republic of Congo provides at least 50% of the world’s cobalt supply. An estimated 40,000 children mine it, paid $1–2 for working up to 12 hours a day while inhaling toxic cobalt dust.

80% of electronic waste in the US and most other countries is transported to Asia. Workers on e-waste sites are paid an average of $1.50 per day, with women frequently relegated to the lowest-tier jobs. They are exposed to harmful materials, chemicals, and acids as they pick apart and separate the electronic equipment into its components, exposure that increases their morbidity and mortality and harms their fertility.

The planet

The terminology and imagery used by Big Tech to refer to the infrastructure underpinning artificial intelligence has misled us into believing that AI is ethereal and cost-free.

Nothing is further from the truth. AI is rooted in material objects: datacentres, servers, smartphones, and laptops. Moreover, training and using AI models demand energy and water and generate CO2.

Let’s crunch some numbers.

  • Luccioni and co-workers estimated that the training of GPT-3 — a GenAI model that has underpinned the development of many chatbots — emitted about 500 metric tons of carbon, roughly equivalent to over a million miles driven by an average gasoline-powered car. It also required the evaporation of 700,000 litres (185,000 gallons) of fresh water to cool down Microsoft’s high-end data centers.
  • It’s estimated that using GPT-3 requires about 500 ml (16 ounces) of water for every 10–50 responses (see the back-of-envelope estimate after this list).
  • A new report from the International Energy Agency (IEA) forecasts that the AI industry could burn through ten times as much electricity in 2026 as in 2023.
  • Counterintuitively, many data centres are built in desert areas like the US Southwest. Why? It’s easier to remove the heat generated inside a data centre in a dry environment. Moreover, that region has access to cheap and reliable non-renewable energy from the largest nuclear plant in the country.
  • Coming back to e-waste, we generate around 40 million tons of electronic waste every year worldwide and only 12.5% is recycled.
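
To make the per-response figure concrete, here is a back-of-envelope sketch that applies the 500 ml per 10–50 responses estimate cited above to a large chat volume, using Klarna’s reported 2.3 million chats as the example. Treating a production chatbot as exactly as water-intensive as GPT-3 is an assumption for illustration only.

```python
# Back-of-envelope water estimate based on the ~500 ml per 10-50 responses figure.
LITRES_PER_BATCH = 0.5           # roughly 500 ml
RESPONSES_PER_BATCH_LOW = 10     # water-heavy end of the range
RESPONSES_PER_BATCH_HIGH = 50    # water-light end of the range

def estimated_water_litres(n_responses: int) -> tuple[float, float]:
    """Return a (low, high) estimate of litres of fresh water evaporated."""
    low = n_responses / RESPONSES_PER_BATCH_HIGH * LITRES_PER_BATCH
    high = n_responses / RESPONSES_PER_BATCH_LOW * LITRES_PER_BATCH
    return low, high

low, high = estimated_water_litres(2_300_000)  # Klarna's reported 4-week chat volume
print(f"{low:,.0f} to {high:,.0f} litres")     # 23,000 to 115,000 litres
```

Even at the low end, that is tens of thousands of litres of fresh water for a single month of one company’s chats, before counting the water used to train the model in the first place.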

In summary, the efficiencies that chatbots are supposed to bring in appear to be based on exploitative labour, stolen content, and depletion of natural resources.

For reflection

Organizations — including NGOs and governments — are under the spell of the AI chatbot mirage. They see it as a magic weapon to cut costs, increase efficiency, and boost productivity.

Unfortunately, when things don’t go as planned, rather than questioning what’s wrong with using a parrot to do the work of a human, they want us to believe that the solution is sending the parrot to Harvard.

That approach prioritizes the short-term gains of a few — the chatbot sellers and purchasers — to the detriment of the long-term prosperity of people and the planet.

My perspective as a tech employee?

I don’t feel proud when I hear a CEO bragging about AI replacing workers. I don’t enjoy seeing a company claim that chatbots provide the same customer experience as humans. Nor do I appreciate organizations obliterating the materiality of artificial intelligence.

Instead, I feel moral injury.

And you, how do YOU feel?

PS. You and AI

  • Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it’d take you years to ramp up your AI literacy?
  • Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
  • Are you concerned because your clients are prioritising AI but you keep procrastinating on ​learning about it because you think you’re not “smart enough”?

I’ve got you covered.

Big Tech Can Clone Your Voice: A Technological Triumph or a Moral Tragedy?

A tic-tac-toe board with human faces as digital blocks, symbolizing how AI works on pre-existing, biased online data for information processing and decision-making
Amritha R Warrier & AI4Media / ​Better Images of AI​ / tic tac toe / CC-BY 4.0

On 29th March, OpenAI – the company that develops ChatGPT and other Generative AI tools – released a ​blog post​ sharing “lessons from a small-scale preview of Voice Engine, a model for creating custom voices.”

More precisely

“a model called Voice Engine, which uses text input and a single 15-second audio sample to generate natural-sounding speech that closely resembles the original speaker.”

They reassure us that

“We are taking a cautious and informed approach to a broader release due to the potential for synthetic voice misuse. We hope to start a dialogue on the responsible deployment of synthetic voices, and how society can adapt to these new capabilities.”

And they warn us that they’ll make the decision unilaterally

“Based on these conversations and the results of these small scale tests, we will make a more informed decision about whether and how to deploy this technology at scale.”

Let’s explore why we should all be concerned.

The Generative AI mirage

In their release, OpenAI tells us about all the great applications of this new tool:

  • Providing reading assistance
  • Translating content
  • Reaching global communities
  • Supporting people who are non-verbal
  • Helping patients recover their voice

Note that for all those use cases, there are already alternatives that don’t carry the downsides of creating a voice clone.

We also learn that other organisations have been testing this capability successfully for a while now. The blog post assumes that we should trust OpenAI’s judgment implicitly. There is no supporting evidence detailing how those tests were run, what challenges were uncovered, and what mitigations were put in place as a consequence.

The caveat

But the most important information is at the end of the piece.

OpenAI warns us of what we should stop doing or start doing because of their “Voice Engine”

“Phasing out voice-based authentication as a security measure for accessing bank accounts and other sensitive information

Exploring policies to protect the use of individuals’ voices in AI

Educating the public in understanding the capabilities and limitations of AI technologies, including the possibility of deceptive AI content

Accelerating the development and adoption of techniques for tracking the origin of audiovisual content, so it’s always clear when you’re interacting with a real person or with an AI”

In summary, OpenAI has decided to develop a technology and plans to roll it out, expecting the rest of the world to adapt to it.

Techno-paternalism

To those of us who have been following OpenAI, the post announcing the development and active use of Voice Engine is not a bug but a feature.

Big Tech has a tradition of setting its own rules, denying accountability, and even refusing to cooperate with governments. Often, their defense has been that society either doesn’t understand the “big picture”, doesn’t deserve an explanation, or is stifling innovation by enacting laws.

Some examples are

  • Microsoft — In 2001, the U.S. government accused Microsoft of illegally monopolizing the web browser market for Windows. Microsoft claimed that “its attempts to ‘innovate’ were under attack by rival companies jealous of its success.”
  • Apple — The ​Batterygate​ scandal affected people using iPhones in the 6, 6S, and 7 families. Customers complained that Apple had purposely slowed down their phones after they installed software updates to get them to buy a newer device. Apple countered that it was “a safety measure to keep the phones from shutting down when the battery got too low”.
  • Meta (Facebook) — After the Cambridge Analytica scandal was uncovered, exposing that the personal data of about 50 million Americans had been harvested and improperly shared with a political consultancy, it took Mark Zuckerberg 5 days to reappear. Interestingly, he chose to publish a post on Facebook as a form of apology. Note that he also refused, three times, invitations to testify before members of the UK Parliament.
  • Google — Between 50 and 80 percent of people searching for porn deepfakes find their way to the websites and tools to create the videos or images via search. For example, in July 2023, around 44% of visits to Mrdeepfakes.com came via Google. Still, the onus is on the victims to “clean” the internet: Google requires them to manually submit content removal requests with the offending URLs.
  • Amazon — They refused for years to acknowledge that their facial recognition algorithms to predict race and gender were biased against darker-skinned women. Instead of improving their algorithms, they chose to blame the auditor’s methodology.

OpenAI is cut from the same cloth. They apparently believe that if they develop the applications, they are entitled to set the parameters for how they are used, or not used, and even to change their mind as they see fit.

Let’s look at their stance on three paramount issues that show the gap between their actions and their values.

Open source

Despite their name — OpenAI — and initially being created as a nonprofit, they’ve been notorious for their inconsistent ​open-source​ practices. Still, each release has appeared to be an opportunity to ​lecture us​ about why society is much better off by leaving it to them to decide how to gatekeep their applications.

For example, Ilya Sutskever, OpenAI’s chief scientist and co-founder, said about the ​release of GPT-4​ — not an open AI model — a year ago

“These models are very potent and they’re becoming more and more potent. At some point it will be quite easy, if one wanted, to cause a great deal of harm with those models. And as the capabilities get higher it makes sense that you don’t want to disclose them.”

“If you believe, as we do, that at some point, AI — AGI — is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea… I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.”

However, the reluctant content suppliers for their models — artists, writers, journalists — don’t have the same rights to decide on the use of the material they have created. For example, let’s remember how Sam Altman shrugged off the claims of newspapers that OpenAI used their ​copyrighted material​ to train ChatGPT.

Safety

The release of Voice Engine comes from the same playbook as the unilateral decision to release their text-to-video model Sora to “red teamers” and “a number of visual artists, designers, and filmmakers”.

The blog post also gives us a high-level view of the safety measures that’ll be put in place

“For example, once in an OpenAI product, our text classifier will check and reject text input prompts that are in violation of our usage policies, like those that request extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others.

We’ve also developed robust image classifiers that are used to review the frames of every video generated to help ensure that it adheres to our usage policies, before it’s shown to the user.”

Let’s remember that OpenAI used Kenyan workers on ​less than $2 per hour​ to make ChatGPT less toxic. Who’ll make Sora less toxic this time?

Moreover, who’ll decide where the line is between “mild” violence, apparently permitted, and “extreme” violence?

Sustainability

For all their claims that their “primary fiduciary duty is to humanity”, their disregard for the environmental impact of their models is surprising.

Sam Altman has been actively talking to investors, including the United Arab Emirates government, to raise funds for a tech initiative that would boost the world’s chip-building capacity, expand its ability to power AI, and cost several trillion dollars.

An ​OpenAI spokeswoman​ said

“OpenAI has had productive discussions about increasing global infrastructure and supply chains for chips, energy and data centers — which are crucial for AI and other industries that rely on them”

But nothing is free in the universe. A study conducted by Dr. Sasha Luccioni (Researcher and Climate Lead at Hugging Face) showed that training the 176 billion parameter LLM BLOOM emitted at least 25 metric tons of carbon dioxide equivalents.

In the article, the authors also estimated that the training of GPT-3 — a 175 billion parameter model — emitted about 500 metric tons of carbon, roughly equivalent to over a million miles driven by an average gasoline-powered car. Why such a difference? Because, unlike BLOOM, GPT-3 was trained using carbon-intensive energy sources like coal and natural gas.

And it doesn’t stop there. Dr. Luccioni conducted further studies on the emissions associated with 10 popular Generative AI tasks.

  • Generating 1,000 images was responsible for roughly as much carbon dioxide as driving the equivalent of 4.1 miles in an average gasoline-powered car.
  • The least carbon-intensive text generation model was responsible for as much CO2 as driving 0.0006 miles in a similar vehicle.
  • Using large generative models to create outputs was far more energy-intensive than using smaller AI models tailored for specific tasks. For example, using a generative model to classify positive and negative movie reviews consumed around 30 times more energy than using a fine-tuned model created specifically for that task.

Moreover, they discovered that the day-to-day emissions associated with using AI far exceeded the emissions from training large models.

And it’s not only emissions. The data centres where those models are trained and run need water as a coolant and, in some cases, as a source of electricity.

Professor Shaolei Ren from UC Riverside found that training GPT-3 in Microsoft’s high-end data centers can directly​ evaporate 700,000 liters​ (about 185,000 gallons) of fresh water. As for the use, Ren and his colleagues estimated that GPT-3 requires about 500 ml (16 ounces) of water for every 10–50 responses.

Four questions for our politicians

It’s time our politicians step up to the challenge of exercising stewardship of AI for the benefit of people and the planet.

I have four questions to get them going:

  • Why are you allowing OpenAI to make decisions unilaterally on technology that affects us all?
  • How can you shift from a reactive stance, in which you let Big Tech like OpenAI drive the regulation of technologies that impact key aspects of governance (from our individual rights to national cybersecurity), to being a proactive key player in decisions that shape society’s future?
  • How can you hold Big Tech accountable for the environmental costs they impose on the planet?
  • How are you ensuring the public becomes digitally literate so they can develop their own informed views about the benefits and challenges of AI and other emergent technologies?

Back to you

How comfortable are you with OpenAI deciding on the use of Generative AI on behalf of humanity?

PS. You and AI

  • Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it’d take you years to ramp up your AI literacy?
  • Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
  • Are you concerned because your clients are prioritising AI but you keep procrastinating on ​learning about it because you think you’re not “smart enough”?

I’ve got you covered.

Artificial Intelligence: A new weapon to colonise the Global South

3D-printed figures who work at a computer in an anonymous environment. They are anonymized, almost de-humanized.
Max Gruber / Better Images of AI / Clickworker Abyss / CC-BY 4.0

The hype around idyllic tech workplaces that originated in Silicon Valley, with tales of great pay, free food and Ping-Pong tables, reaches a whole new level when we talk about artificial intelligence (AI). Tech companies that want to remain competitive court data scientists and expert AI developers with six-figure salaries and perks that range from unlimited holidays, on-site gyms, and nap pods to subsidising egg-freezing and IVF treatments. I am a director at a software company that develops AI applications, so I have seen it firsthand.

But I also spent 12 years in Venezuela, so I am aware that AI workers there have very different stories to tell than their counterparts in the Global North. And this North-South disparity in working conditions is repeated across the world, amplified to the point where, in the South, a large portion of AI workers are gig workers on subsistence rates.

Image annotators

Take, for instance, the self-driving car industry. It seeks to substitute people at the wheel with algorithms that mimic human pattern recognition – yet it relies on intensive human labour.

Self-driving car algorithms need millions of high-quality images labelled by annotators – workers who assess and identify all the elements in each image. And the industry wants these annotated images at the lowest possible cost. Enter: annotators in the Global South.

Annotators in Venezuela are paid an average of 90 cents an hour with some being paid as low as 11 cents/hour. The situation is similar for their counterparts in North Africa.

The injustice is not only about low pay but also in work conditions. Workers are under constant pressure because the data-labelling platforms have quota systems that remove annotators from projects if they fail to meet targets for the completion of tasks. The algorithms keep annotators bidding for new gigs day and night, because high-paying tasks may only last seconds on their screens before disappearing.

And annotators are not the only tech workers in the Global South making it possible for the Global North to reap the benefits of AI. 

Social media moderators

The impact of fake news on elections and conflicts has put pressure on tech big bosses to moderate social media content better. Their customary response has been to offer reassurances that they are working on improving the AI tools that parse content on their platforms. 

We frequently hear that AI algorithms can be deployed to remove the stream of depictions of violence and other disturbing content on the internet and social media. But algorithms can only do so much – platforms need human moderators to review content flagged by AI tools. So where do those people live and how much are they paid? 

Kenya is the headquarters of Facebook’s content moderation operation for sub-Saharan Africa. Its workers are paid as little as $1.50 an hour for watching deeply disturbing content, back-to-back, without the benefit of any “wellness” breaks or the right to unionise. Moreover, they have a 50-second target to make a decision on whether content should be taken down or not. Consistently taking longer to make the call leads to a dismissal.   

Still, moderation is not applied equally around the world. As the Mozilla Internet Health Report 2022 says: “although 90% of Facebook’s users live outside the US, only 13% of moderation hours were allocated to labelling and deleting misinformation in other countries in 2020.” And 11 out of the 12 countries leading the ranking of national Facebook audiences are part of the Global South. This is in line with prioritising user engagement over their safety.

Mining disasters

While AI is naturally associated with the virtual world, it is rooted in material objects: datacentres, servers, smartphones, and laptops. And these objects are dependent on materials that need to be taken from the earth with attendant risks to workers’ health, local communities, and the planet.

For example, cobalt is a critical component in every lithium-ion rechargeable battery used in mobile phones, laptops and electric cars. The Democratic Republic of Congo provides 60% of the world’s cobalt supply, and an estimated 40,000 children work in its cobalt mines, according to UNICEF. They are paid $1-2 for working up to 12 hours a day and inhaling toxic cobalt dust.

Unfortunately, the Global North’s apathy towards tackling child labour in the cobalt supply chain means that electronics and car companies get away with maximising profit at the expense of human rights and harm to miners.

And one of the driest places on earth, the Atacama Desert in Chile, holds more than 40% of the world’s supply of lithium ore. Extracting lithium requires enormous quantities of water – some 2,500 litres for each kilo of the metal. As a result, freshwater is less accessible to the local communities, affecting farming and pastoral activities as well as harming the delicate ecosystem.

Guinea pigs

As well as taking advantage of lax protection of human rights and health to pick up cheap labour, tech companies look to the poor data privacy laws in the Global South to enable them to trial their AI products on people there.

Invasive AI applications are tested in Africa, taking advantage of the need for cash across the continent coupled with the low restrictions regarding data privacy. Examples include apps specialised in money lending – so-called Lendtechs. They use questionable methods such as collecting micro-behavioural data points to determine the credit-worthiness of the users in the region. 

Examples of such data points include: the number of selfies, games installed, and videos created and stored on phones, the typing and scrolling speed, or SMS data to build a credit score using proprietary and undisclosed algorithms. Lack of regulation enables lenders to exploit the borrowers’ contacts on their phones to call their family and friends to prompt loan repayment. Reports suggest that loan apps have plunged many Kenyans into deep debt and pushed some into divorce or suicide.

The human rights project NotMy.ai has mapped 20 AI schemes led by Latin American governments that were seen as likely to stigmatise and criminalise the most vulnerable people. Some of the applications – like predictive policing – have already been banned in some regions of the US and Europe. Numerous such initiatives are linked to Global North software companies.

Among the projects, two are especially creepy. First, the rollout of a tech application across Argentina, Brazil, Colombia, and Chile that promises to forecast the likelihood of teenage pregnancy based on data such as age, ethnicity, country of origin, disability, and whether the subject’s home had hot water in the bathroom. Second, a Minority Report-inspired model deployed in Chile that predicts a person’s lifetime likelihood of a criminal career from age, gender, registered weapons, and family members with a criminal record, and that reports a 37% false-positive rate.

The future is already there

We in the Global North might naturally consider the Global South to have only a marginal involvement in the use and development of AI. The reality is that the exploitation of the Global South is crucial for the Global North to harness the benefits of AI and even manufacture AI hardware. 

The South provides cheap labour, natural resources, and poorly-regulated access to populations on whom tech firms can test new algorithms and resell failed applications. 

The North-South chasm in digital economies was summed up elegantly in a 2003 Economist piece by novelist William Gibson, who foresaw cyberspace in his 1984 novel Neuromancer. “The future is already here,” he declared, adding, “it’s just not evenly distributed.”

In truth, the exploitation and harm that go with the development of AI demonstrate that it’s not just the future that is with us, out of time, but also the inhumanity of the colonial past.

NOTE: This article was published in The Mint Magazine.

PS. You and AI

  • Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it’d take you years to ramp up your AI literacy?
  • Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
  • Are you concerned because your clients are prioritising AI but you keep procrastinating on ​learning about it because you think you’re not “smart enough”?

I’ve got you covered.