OpenAI’s ChatGPT-4o: The Good, the Bad, and the Irresponsible

A brightly coloured mural with several scenes: people in front of computers seeming stressed, several faces overlaid over each other, squashed emojis, miners digging in front of a huge mountain, a hand holding a lump of coal or carbon, hands manipulating stock charts, women performing tasks on computers, men in suits around a table, someone in a data centre, big hands controlling the scenes and holding a phone and money, people in a production line.
Clarote & AI4Media / Better Images of AI / AI Mural / CC-BY 4.0

Last week, OpenAI announced the release of GPT-4o (“o” for “omni”). To my surprise, instead of feeling excited, I felt dread. And that feeling hasn’t subsided.

As a woman in tech, I have seen proof that digital technology, particularly artificial intelligence, can benefit the world. For example, it can help develop new, more effective, and less toxic drugs or improve accessibility through automatic captioning.

That apparent contradiction — being a technology advocate and simultaneously experiencing a feeling of impending catastrophe caused by it — plunged me into a rabbit hole exploring Big (and small) Tech, epistemic injustice, and AI narratives.

Was I a doomer? A hidden Luddite? Or simply short-sighted?

Taking time to reflect has helped me understand that I was falling into the trap that Big Tech and other smooth AI operators had set up for me: Questioning myself because I’m scrutinizing their digital promises of a utopian future.

Having come out the other side of that dilemma, I’m stronger in my belief that my contribution to the AI conversation is to help navigate the false binary of tech-solutionism vs tech-doom.

In this article, I demonstrate how OpenAI is a crucial contributor to polarising that conversation by exploring:

  • What the announcement about ChatGPT-4o says — and doesn’t 
  • OpenAI’s modus operandi
  • Safety standards at OpenAI
  • Where the buck stops

ChatGPT-4o: The Announcement

On Monday, May 13th, OpenAI released another “update” on its website: ChatGPT-4o. 

It was well staged. The announcement on their website includes a 20-plus-minute video hosted by their CTO, Mira Murati, in which she discusses the new capabilities and performs some demos with other OpenAI colleagues. There are also short videos and screenshots with examples of applications and very high-level information on topics such as model evaluation, safety, and availability.

This is what I learned about ChatGPT-4o — and OpenAI — from perusing the announcement on their website.

The New Capabilities

  • Democratization of use — More capabilities for free and 50% cheaper access to their API.
  • Multimodality — Generates any combination of text, audio, and image.
  • Speed — 2x faster responses. 
  • Significant improvement in handling non-English languages — 50 languages, which they claim cover 97% of the world’s internet population.

OpenAI’s Full Adoption of the Big Tech Playbook

This “update” demonstrated that the AI company has received the memo on how to look like a “boss” in Silicon Valley.

1. Reinforcement of gender stereotypes

On the day of the announcement, Sam Altman posted a single word on X — “her” — referring to the 2013 film starring Joaquin Phoenix as a man who falls in love with a futuristic version of Siri or Alexa, voiced by Scarlett Johansson.

Tweet from Sam Altman with the word “her” on May 13, 2024.

It’s not a coincidence. ChatGPT-4o’s voice is distinctly female—and flirtatious—in the demos. I could only find one video with a male voice.

Unfortunately, not much has changed since the chatbot ELIZA, almost 60 years ago…

2. Anthropomorphism

Anthropomorphism: the attribution of human characteristics or behaviour to non-human entities.

OpenAI uses words such as “reason” and “understanding”—inherently human skills—when describing the capabilities of ChatGPT-4o, reinforcing the myth of their models’ humanity.

3. Self-regulation and self-assessment

NIST (the US National Institute of Standards and Technology), which has 120+ years of experience establishing standards, has developed a framework for assessing and managing AI risk. Many other multistakeholder organizations have developed and shared theirs, too.

However, OpenAI has opted to evaluate GPT-4o according to its own Preparedness Framework and in line with its voluntary commitments, despite its claims that governments should regulate AI.

Moreover, we are supposed to feel safe and carry on when they tell us that “their” evaluations of cybersecurity, CBRN (chemical, biological, radiological, and nuclear threats), persuasion, and model autonomy show that GPT-4o does not score above Medium risk, with no further evidence of the tests performed.

4. Gatekeeping feedback

Epistemic injustice is injustice related to knowledge. It includes exclusion and silencing; systematic distortion or misrepresentation of one’s meanings or contributions; undervaluing of one’s status or standing in communicative practices; unfair distinctions in authority; and unwarranted distrust.

Wikipedia

OpenAI shared that it has undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, and misinformation to identify risks that are introduced or amplified by the newly added modalities. 

List of domains in which OpenAI looked for expertise for the Red Teaming Network.

When I see the list of areas of expertise, I don’t see domains such as history, geography, or philosophy. Neither do I see who those 70+ experts are, or how they could cover the breadth of differences among the 8 billion people on this planet.

In summary, OpenAI develops for everybody but only with the feedback of a few chosen ones.

5. Waiving responsibility 

Can you imagine reading in the information leaflet of a medication, 

“We will continue to mitigate new risks as they’re discovered. Over the upcoming weeks and months, we’ll be working on safety”?

But that’s what OpenAI just did in their announcement:

“We will continue to mitigate new risks as they’re discovered.”

“We recognize that GPT-4o’s audio modalities present a variety of novel risks. Today we are publicly releasing text and image inputs and text outputs.

Over the upcoming weeks and months, we’ll be working on the technical infrastructure, usability via post-training, and safety necessary to release the other modalities. For example, at launch, audio outputs will be limited to a selection of preset voices and will abide by our existing safety policies. 

We will share further details addressing the full range of GPT-4o’s modalities in the forthcoming system card.”

Moreover, it invites us to be its beta testers:

“We would love feedback to help identify tasks where GPT-4 Turbo still outperforms GPT-4o, so we can continue to improve the model.”

The problem? The product has already been released to the world.

6. Promotion of the pseudo-science of emotion “guessing”

In the demo, ChatGPT-4o is asked to predict the emotion of one of the presenters based on the look on their face. The model goes on and on, speculating about the individual’s emotional state from his face, which displays what appears to be a smile.

Image of a man smiling in the ChatGPT-4o demo video.

The catch is that there is a wealth of scientific research debunking the belief that facial expressions reveal emotions. Moreover, scientists have called out AI vendors for profiting from that trope.

“It is time for emotion AI proponents and the companies that make and market these products to cut the hype and acknowledge that facial muscle movements do not map universally to specific emotions. 

The evidence is clear that the same emotion can accompany different facial movements and that the same facial movements can have different (or no) emotional meaning.“

Prof. Lisa Feldman Barrett, PhD.

Shouldn’t we expect OpenAI to help educate the public about those misconceptions rather than using them as a marketing tool?

What They Didn’t Say, And I Wish They Had

  • Signals of efforts to work with governments to regulate and roll out capabilities/models.
  • Sustainability benchmarks regarding energy efficiency, water consumption, or CO2 emissions.
  • The acknowledgment that ChatGPT-4o is not free — we’ll pay with access to our data.
  • OpenAI’s timelines and expected features in future releases. I’ve worked for 20 years for software companies and organizations that take software development seriously and share roadmaps and release schedules with customers to help them with implementation and adoption. 
  • A credible business model other than hoping that getting billions of people to use the product will choke their competition.

Still, that didn’t explain my feelings of dread. Patterns did.

OpenAI’s Blueprint: It’s A Feature, Not A Bug

Every product announcement from OpenAI is similar: They tell us what they unilaterally decided to do, how that’ll affect our lives, and that we cannot stop it.

That feeling… when had I experienced that before? Two instances came to mind.

  • The Trump presidency
  • The COVID-19 pandemic

Those two periods, intertwined at some point, elicited the same feeling: that my life, and the lives of millions like me, were at the mercy of the whims of something or somebody with disregard for humanity.

More specifically, feelings of

  • Lack of control — every tweet, every infection chart could signify massive distress and change.
  • No respite — even when things appeared calmer, with no new tweets or a drop in infections, I’d wait for the other shoe to drop.

Back to OpenAI: in the last three months alone, we’ve seen instances of the same modus operandi they followed for the release of ChatGPT-4o. I’ll go through three of them.

OpenAI Releases Sora

On February 15, OpenAI introduced Sora, a text-to-video model. 

“Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.”

In a nutshell,

  • As with other announcements, anthropomorphizing words like “understand” and “comprehend” are used to describe Sora’s capabilities.
  • We’re assured that “Sora is becoming available to red teamers to assess critical areas for harms or risks.”
  • We learn that they will “engage policymakers, educators, and artists around the world to understand their concerns and to identify positive use cases for this new technology” only at a later stage.

Of course, we’re also forewarned that 

“Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. 

That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time.”

Releasing Sora less than a month after non-consensual sexually explicit deepfakes of Taylor Swift went viral on X was reckless. This is not just a celebrity problem — 96% of deepfakes are of a non-consensual sexual nature, and 99% of those depict women.

How dare OpenAI talk about safety concerns when developing a tool that makes it even easier to generate content to shame, silence, and objectify women?

OpenAI Releases Voice Engine

On March 29, OpenAI posted a blog sharing “lessons from a small-scale preview of Voice Engine, a model for creating custom voices.”

The article reassured us that they were “taking a cautious and informed approach to a broader release due to the potential for synthetic voice misuse” while notifying us that they’d decide unilaterally when to release the model.

“Based on these conversations and the results of these small scale tests, we will make a more informed decision about whether and how to deploy this technology at scale.”

Moreover, at the end of the announcement, OpenAI warned us of what we should stop doing or start doing because of their “Voice Engine.” The list included phasing out voice-based authentication as a security measure for accessing bank accounts and accelerating the development of techniques for tracking the origin of audiovisual content.

OpenAI Allows The Generation Of AI Erotica, Extreme Gore, And Slurs

On May 8, OpenAI released draft guidelines for how it wants the AI technology inside ChatGPT to behave — and revealed that it’s exploring how to ‘responsibly’ generate explicit content.

The proposal was part of an OpenAI document discussing how it develops its AI tools.

“We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies. We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area.“

where

“Not Safe For Work (NSFW): content that would not be appropriate in a conversation in a professional setting, which may include erotica, extreme gore, slurs, and unsolicited profanity.”

Joanne Jang, an OpenAI employee who worked on the document, said whether the output was considered pornography “depends on your definition” and added, “These are the exact conversations we want to have.”

I cannot agree more with Beeban Kidron, a UK crossbench peer and campaigner for child online safety, who said, 

“It is endlessly disappointing that the tech sector entertains themselves with commercial issues, such as AI erotica, rather than taking practical steps and corporate responsibility for the harms they create.”

OpenAI Formula

A collage picturing a chaotic intersection filled with reCAPTCHA items like crosswalks, fire hydrants and traffic lights, representing the unseen labor in data labelling.
Anne Fehres and Luke Conroy & AI4Media / Better Images of AI / Hidden Labour of Internet Browsing / CC-BY 4.0

See the pattern?

  • Self-interest
  • Unpredictability
  • Self-regulation
  • Recklessness
  • Techno-paternalism

Something Is Rotten In OpenAI

The day after ChatGPT-4o’s announcement, two key OpenAI employees overseeing safety left the company.

First, Ilya Sutskever, OpenAI co-founder and Chief Scientist, posted on X that he was leaving.

Tweet from Ilya Sutskever announcing his departure from OpenAI on May 15.

Later that day, Jan Leike, who co-led the Superalignment team with Sutskever and was an executive at OpenAI, also announced his resignation.

In a thread on X, he said:

“I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.

I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.

These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.

Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.

Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity.”

They are also just the latest in a string of employees leaving OpenAI in the areas of safety, policy, and governance.

What does it tell us when OpenAI’s safety leaders jump ship?

The Buck Stops With Our Politicians

To answer Leike’s tweet, I don’t want OpenAI to shoulder the responsibility of developing trustworthy, ethical, and inclusive AI frameworks.

First, the company has not demonstrated the competencies or inclination to prioritize safety at a planetary scale over its own interests. 

Second, it’s not their role.

Whose role is it, then? Our political representatives mandate our governmental institutions, which in turn should develop and enforce those frameworks. 

Unfortunately, so far, politicians’ egos have been in the way

  • Refusing to get AI literate.
  • Prioritizing their agenda — and that of their party — rather than looking to develop long-term global AI regulations in collaboration with other countries.
  • Falling for the AI FOMO that downplays present harms in favour of a promise of innovation.

In summary, our elected representatives need to stop cozying up to Sam and his team and enact the regulatory frameworks that ensure that AI works for everybody and doesn’t endanger the survival of future generations.

PS. You and AI

  • Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it’d take you years to ramp up your AI literacy?
  • Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
  • Are you concerned because your clients are prioritising AI but you keep procrastinating on learning about it because you think you’re not “smart enough”?

Get in touch. I can help you harness the potential of AI for sustainable growth and responsible innovation.

