A New Religion: 8 Signs AI Is Our New God

Twelve characters inspired by the twelve Chinese Zodiac animals are gathered around a long square table. Each character has a human-like body, but their heads represent different zodiac animals. On the table are various tools and machines related to technology, like computers, hard drives, files, data charts, and keyboards. The characters all seem to be engaging in conversation with one another. In the centre of the window is an old-style Microsoft logo.
Yutong Liu / Joining the Table / Licenced by CC-BY 4.0.

Religion and technology have been in a love-hate relationship since their inception.

Sometimes, technology has been a tool of religion.

For example, many religious texts position God(s) as the uber-technologists.

In the beginning, God created the heavens and the earth.

Genesis 1:1

A God that, throughout history, has empowered their prophets and followers to learn and use technology in the name of religion

So God said to Noah, “I am going to put an end to all people […]. So make yourself an ark of cypress wood; make rooms in it and coat it with pitch inside and out. This is how you are to build it: The ark is to be three hundred cubits long, fifty cubits wide and thirty cubits high. Make a roof for it, leaving below the roof an opening one cubit high all around. Put a door in the side of the ark and make lower, middle and upper decks.

Genesis 6:13–16

Other times, technology itself has been perceived as “God-like”.

Think about one of the most ancient technologies: Fire. Whilst today we claim to have mastered it, fire deities have a long tradition that spans time and location.

And many more technologies have elicited awe, powerlessness, or veneration: electricity, the steam engine, IVF, cars…

However, that “God-like” feel typically faded away once the “magic” was dispelled and the natural laws governing the phenomenon were exposed.

But AI has been a technology-as-religion game-changer. Its God-like status has been cemented with time — and recently in an exponential manner — rather than being discredited, like other technologies. After all, the field of AI research was founded at a workshop at Dartmouth College in 1956, almost 70 years ago.

So, how come AI has reversed the trend? By proactively becoming a religion.

Don’t believe me?

Let me walk you through 8 signs that we’ve already adopted AI as a religion.

Disclaimer: You’ll notice that the signs are heavily skewed towards a Christian view of religion. This is not because of a disregard for other creeds, but because I was raised Catholic and have lived in countries where the Christian faith is the most popular. I welcome feedback from other faiths.

Sign #1: The Promise of Paradise

When Eve and Adam are expelled from Paradise, God is very explicit about what they’ll be losing

To the woman he said,

“I will multiply your sufferings in childbirth;
with pain you shall bear your children.
You shall desire your husband,
but he shall lord it over you.”

To the man he said, […]

“Cursed be the soil because of you!
With effort you shall obtain food
all the days of your life. […]
You are dust,
and unto dust you shall return.”

Genesis 3:16–19

This sets in motion the quest for the promised paradise that many religions have pursued for thousands of years: that place where there is no more hunger, sickness, work, or even death.

Until AI arrived. Or more precisely, until Generative AI did.

And what does AI have to offer as paradise? Abundance.

Last year, Sam Altman — one of AI’s “high priests” — pontificated on X about that promised abundance. The gist:

AI is the promise of paradise on Earth, provided that we keep shovelling money, electricity, water, and chips at its development.

Sign #2: Infallibility or the Promise of Enlightenment

God’s infallibility is a concept in many religions, and some of their prophets and representatives have claimed it for themselves, too, to explain concepts further, settle arguments, or propose new ideas.

For example, when Catholic Popes speak “ex cathedra”, they become infallible

when the Roman pontiff speaks ex cathedra, that is, when, in the exercise of his office as shepherd and teacher of all Christians, in virtue of his supreme apostolic authority, he defines a doctrine concerning faith or morals to be held by the whole Church, he possesses, by the divine assistance promised to him in blessed Peter, that infallibility which the divine Redeemer willed His Church to enjoy in defining doctrine concerning faith or morals.

Pope Pius IX

How has AI become infallible? Through chatbots. Generative AI is presented as the collector and remaker of “all human knowledge” — or at least the knowledge available on the internet.

Once upon a time, Wikipedia used to be that “repository” of knowledge. Disputes would be settled with a

“I’ve checked Wikipedia and it says…”

Now, arguments are countered with a

“but ChatGPT says…”

The difference?


Sign #3: Prophets

Prophets are supposed to be intermediaries between God and humanity. They have existed in many cultures and religions throughout history, including the Mesopotamian religion, Zoroastrianism, Judaism, Christianity, Manichaeism, Islam, the Baháʼí Faith, and Thelema.

Moses is often touted as the greatest prophet of all time.

God also said to Moses, “Say to the Israelites, […]

‘The Lord, the God of your fathers — the God of Abraham, Isaac and Jacob — appeared to me and said: I have watched over you and have seen what has been done to you in Egypt. And I have promised to bring you up out of your misery in Egypt into the land of the Canaanites, Hittites, Amorites, Perizzites, Hivites and Jebusites — a land flowing with milk and honey.

Exodus 3:15–17

Without prophets, we wouldn’t learn about God(s)’ wishes and requests. It’s the same with AI: not everybody has equal access to the AI God, so we rely on intermediaries. It’s only that now we call them Tech Evangelists, among other names.

What qualifies as an AI prophet?

  • They are considered “geniuses.” That is, strictly math and computing gurus — even if they never completed a related university degree, as in the case of Bill Gates, Mark Zuckerberg, or Sam Altman. If your knowledge is in topics such as sociology, philosophy, or language, we regret to inform you that you cannot reach “prophet” status.
  • They’re perceived as actively involved in developing AI systems. If you focus on AI regulation or present harms caused by AI systems, chances are that you’re not seen as visionary “enough” to be considered a prophet.
  • They relentlessly preach about AI’s capacity for either virtue or evil.
  • Society entitles them to dictate policies, influence regulation, and rebuke current laws to support their vision of AI.

Sign #4: Prophecies

The value of prophets is in their prophecies, the messages that have been communicated to them by a supernatural entity.

Prophecies are a feature of many cultures and belief systems and usually contain divine will or law, or preternatural knowledge, for example, of future events.

They have helped humans to navigate the discomfort of day-to-day uncertainty for millennia. From providing relief from nature’s whims — when will it rain again? — to guiding our war efforts — who will win? — to shaping our behaviour — they follow the Old Testament formula “Repent of sin X and turn to righteousness, otherwise consequence Y will occur.”

They have been crucial for religions, inspiring action, inaction, fear, joy, and even patience.

But Ahaz said, “I will not ask; I will not put the Lord to the test.”

Then Isaiah said, “Hear now, you house of David! Is it not enough to try the patience of humans? Will you try the patience of my God also? Therefore the Lord himself will give you a sign: The virgin will conceive and give birth to a son, and will call him Immanuel.

Isaiah 7:12–14

And AI is no exception.

We’re continuously fed prophecies about how AI will irrevocably change the future. Unlike with sacred texts, we hear both utopian and dystopian prophecies, depending solely on how we embrace AI.

For example, Geoffrey Hinton — winner of the 2024 Nobel Prize in Physics for his work on AI — recently said that there is a 10–20% chance AI will lead to human extinction within three decades.

But hold on. Yann LeCun, the chief AI scientist at Meta — and, with Hinton, one of the three “godfathers of AI” — has said that AI “could actually save humanity from extinction”.

The more extravagant the prediction about the future of AI, the more the media promotes it. Just like in everything else, extreme views that polarise people and get likes and comments win.

The losers? Those who resist the hype and call for nuance.

Sign #5: AGI Atheists

A key element of religions is to show how superior their followers are to those who don’t believe in God.

The fool says in his heart, “There is no God.”

Psalm 53:1

Using “fool” to refer to those who disagree with you is very clever if you can pull it off. You remove their credibility as a “knower”.

Most of the current hype around AI is linked to its promise of delivering AGI (Artificial General Intelligence), a “kind” of AI that will match human intelligence or even surpass it.

Who are the atheists? Those labelled as “AI sceptics” because they dare to question the mirage of AGI and want human accountability regarding the harms caused by AI systems now.

And AGI prophets are borrowing from the religion playbook, wielding “foolishness” as a weapon against dissenters.

For example, by downplaying the expertise of those who highlight the baseless tropes about a nearly-there AGI, or by dismissing their concerns about algorithmic bias as a lack of vision.

Sign #6: Faith

Faith is the ultimate test of religion. You must believe in what you cannot see or what goes against our understanding of the physical world, e.g. God’s existence, hell as punishment, or Jesus Christ’s resurrection.

By faith we understand that the universe was created by the word of God, so that what is seen was not made out of things that are visible.

Hebrews 11:3

Sustainability is an excellent example of how AI prophets want to test our AGI faith. As I mentioned in the article Sustainable AI, tech leaders such as Eric Schmidt (former Google CEO) and Sam Altman (OpenAI CEO) have disregarded concerns about AI’s sustainability, as AGI will supposedly solve them in the future.

My own opinion is that we’re not going to hit the climate goals anyway, because we’re not organized to do it.

Yes, the needs in this area will be a problem, but I’d rather bet on AI solving the problem than constraining it and having the problem.

Eric Schmidt

Although it will happen incrementally, astounding triumphs — fixing the climate, establishing a space colony, and the discovery of all of physics — will eventually become commonplace. With nearly-limitless intelligence and abundant energy — the ability to generate great ideas, and the ability to make them happen — we can do quite a lot.

Sam Altman

In summary, they’re asking us to sit down and relax whilst they relentlessly build more data centres (McKinsey predicts global demand for data centre capacity could almost triple by 2030) that deplete our natural resources and compromise the planet for future generations.

We “just” need faith that somehow AI will find a way to fix climate change.

Piece of cake.

Sign #7: Organised Worship

One thing that religions with the largest number of believers have in common is that they have an organisational structure with precepts, hierarchies, and rites that maintain their status by reassuring followers that they are on the “right path”.

This righteousness and divine mandate are used as a shield when things go wrong (e.g. sexual abuse, slavery, prostitution).

AI has its organised worship structure

  • Big Tech, which has seen how adding AI to their quarterly statements is the gift that keeps on giving extraordinary valuations.
  • Venture Capital, which by June 2024 had already invested $600 billion in AI and is still searching for all the revenue in 2025.
  • Tech startups, which rely on the tired – and rarely successful – hockey stick growth model that promises riches for those who relentlessly pursue hypergrowth at all costs.

This hierarchy is supported by lofty ideas such as

Don’t be evil.

Google

Ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.

OpenAI

Act for the global good.

Anthropic

Ideals that supposedly make justifiable whatever is done in their name.

Sign #8: Patriarchy

History gives plenty of examples confirming that, to be credible, a religion needs to be led by a group of wise men with strong opinions who are ready to be violent if necessary.

Jesus entered the temple courts and drove out all who were buying and selling there. He overturned the tables of the money changers and the benches of those selling doves.

Matthew 21:12

When Moses approached the camp and saw the calf and the dancing, his anger burned and he threw the tablets out of his hands, breaking them to pieces at the foot of the mountain.

Exodus 32:19

This goes hand in hand with religion downplaying women’s role as faith leaders. Let’s remember that, for example, you still cannot be both a priest and a woman in the Catholic Church in the 21st century. If this were any other institution, it would have been sued for sex discrimination, but as it’s religion, it’s “ok”.

AI is no different.

The contributions of women to AI are regularly downplayed. In 2023, the now-infamous article “Who’s Who Behind the Dawn of the Modern Artificial Intelligence Movement” showcased 12 men. Not even one woman in the group.

The article prompted criticism right away and “counter-lists” of women who have been pivotal in AI development and uncovering its harms. Still, women are not seen as “AI visionaries”.

What’s worse, AI has been consistently weaponised as a tool of misogyny, and its harms dismissed as unconscious bias or blamed on the lack of diversity in datasets.

As with religion, the “AI faith” considers women useful as long as they conform to the rules, or as an easy target for attacks, but never as leaders.

Another AI Future Is Possible

We’ve been told — and retold — that women are behind in generative AI adoption. The gap has been linked to women being more likely to report not knowing how to use AI tools, and to lower confidence in their ability to use them effectively, for example when crafting queries or applying them to tasks.

But the Harvard Business School working paper Global Evidence on Gender Gaps and Generative AI, which synthesised data from 18 studies covering more than 140,000 individuals worldwide, provided a much richer understanding of the gender divide in generative AI.

For example, women are more likely than men to say they need training before they can benefit from ChatGPT, and more likely to perceive AI usage in coursework or assignments as unethical or equivalent to cheating.

Women are also more likely to have a negative attitude towards AI.

  • For example, women perceive lower productivity benefits of using generative AI at work and in job search.
  • They are also more likely to agree that chatbots should be prohibited in educational settings and that using chatbots goes against the purpose of education, as well as being more concerned about how generative AI will impact learning in the future.
  • Women are less likely to agree that chatbots can generate better results than they can on their own.
  • They are less likely to agree that chatbots can improve their language ability, and less likely to trust generative AI over traditional human-operated services in education and training, information, banking, health, and public policy.

In summary, women correctly understand that AI is not “neutral” or a religion to be blindly adopted and prefer not to use it when they perceive it as unethical.

Mic drop.

What Can We Learn From Women?

When I shared the research above with my network, many rushed to “fix” women by

  • Offering them more training.
  • Stressing the importance of “giving it a go” without previous knowledge.
  • Designing friendlier chatbot interfaces.

But these well-intentioned people were missing a key point. Those women are not “wrong”. Quite the opposite: they were acting responsibly.

They opted out of using a technology for which they didn’t see the benefit, weren’t appropriately trained on, or that clashed with their ethical values.

What’s wrong with that?

Upending The AI Power Asymmetry

AI is being sold wrapped in FOMO — Fear Of Missing Out.

If we don’t adopt AI, we’ll lose our jobs, critical scientific discoveries, and even the opportunity to be happy.

But is that true? And more importantly, does it serve us well?

Look at “physical” technologies — food, medicines, cars — they are regulated. We have lists of ingredients, clinical trials, and driving licenses to ensure we protect the public from misuse and that they’re handled responsibly.

Unfortunately, we have a lower bar for digital technology in general and AI in particular. Why is that? Because we sorely underestimate their reach and harms and we are seduced by their promises of productivity and dopamine hits.

Imagine if we had had the foresight to regulate the internet 30 years ago, when Section 230 was enacted, providing immunity for online computer services; or social media in the 2010s, when platforms started to surge; or non-consensual sexual deepfakes in 2017, when the technology first appeared on Reddit.

Moreover, what if we had taught people how to interact with those technologies within an ethical and inclusive framework, rather than let them “learn by doing”?

Would that be such an awful alternative present? I posit that we’d feel more empowered by those technologies and less dependent on them.

Like the women in the study, I believe we deserve better.

It’s not too late to embed ethical and inclusive values when teaching AI and demand applications that truly provide value and minimise harms to people and the planet.

The future of inclusive AI is feminist. And you can be part of it.


WORK WITH ME

Women Leading with AI Training: Become a thought-leader of responsible AI in your organisation, community, or sector.

How does this article resonate with you?