Why AI could change everything we value about work and wealth


“This is the biggest wake-up call in history.”

That was how Mo Gawdat framed the rise of artificial intelligence during his address at the recent PSG Annual Conference.

The former chief business officer of Google [X], bestselling author, and AI entrepreneur argued that AI is forcing humanity to revisit fundamental assumptions about work, economics, intelligence, governance, and even human relevance itself.

He warned that the pace of change is accelerating faster than most societies, businesses, and governments are prepared for – and the next decade could prove deeply disruptive before eventually giving way to what he described as an era of abundance.

Overhyped – and underhyped

Gawdat argued that AI occupies a strange position in public discourse because it is simultaneously overhyped and underhyped.

For the general public, AI is often reduced to headlines about deepfakes, data centres, or social media fears. For engineers and developers working directly with the systems, however, the experience is entirely different.

“We geeks,” he said, “are sitting there with those machines and being mind-blown literally four times a day.”

He compared the current state of AI with the early days of personal computing, when technology existed long before user-friendly interfaces made it accessible to ordinary people.

That gap, he suggested, is temporary.

Within a year, AI interfaces are likely to become deeply embedded into everyday workflows, messaging systems, communication tools, and business processes.

The implications are already visible in how quickly technology can now be developed.

Gawdat used his company, Emma.love, as an example. A platform that he estimated would previously have required 350 engineers and four years to build was developed by a tiny team in weeks. The code base was rewritten seven times because development had become so fast and inexpensive.

That speed, he argued, delivers a fundamental shock to existing economic systems.

Traditional capitalism depends on labour arbitrage: employing people to generate value at scale. AI changes that equation because intelligence itself is becoming increasingly commoditised.

The era of augmented intelligence

Gawdat described the current phase of AI as “the era of augmented intelligence”.

Rather than replacing humans outright, AI currently acts as an intelligence layer that dramatically expands human capability.

He estimated that AI tools allow him to “borrow between 80 and 100 IQ points” when working on complex tasks.

That shift changes productivity, creativity, innovation, and the barriers to entry across industries.

It also changes the definition of competitive advantage.

“If I can build Emma with Senad [Ibraimovic] and Gaurav [Sen] in six weeks,” he asked, “what prevents South Africa from becoming the largest exporter of ERP systems in the world within two days, within six days, within six weeks?”

Emma.love is an AI-powered relationship platform developed by Gawdat, who provided the psychological framework, and lead engineers Sen and Ibraimovic, who built the complex “swarm of AI” architecture.

The question, in his view, is why countries and businesses are not moving faster to seize the opportunity.

The age of dystopia – and what comes after

Gawdat believes the world is entering a period of disruption that will be significantly more difficult than most people currently appreciate.

In his assessment, the next 12 to 15 years could become more economically and socially destabilising than anything experienced in recent decades – potentially exceeding the disruption associated with the Great Depression or the Second World War.

He repeatedly referred to this period as a form of dystopia – not because AI itself is inherently dangerous, but because of how intelligence is likely to be applied in the early stages of the transition.

“The world is going to be a little tougher going forward,” he said, “because there is something wrong with the way intelligence is going to be applied first.”

In practical terms, his concern centres on concentration of power and economic disruption.

Companies are incentivised to automate wherever possible because AI dramatically reduces costs and increases productivity. That may benefit individual businesses, but if entire sectors replace large portions of their workforce simultaneously, societies face a deeper structural problem: consumers without income cannot sustain consumption-driven economies.

At the same time, Gawdat argued, AI capability is concentrating unprecedented influence in the hands of a relatively small number of technology companies and developers – people making decisions that could reshape society without direct democratic accountability.

Yet despite the disruption he anticipates, Gawdat does not believe the dystopian phase is permanent.

He described it as a transitional period – intense, disruptive, but ultimately temporary.

“We can reduce the dystopia,” he argued. “We can reduce its intensity and reduce its duration by engaging.”

That engagement, in his view, requires societies to start confronting uncomfortable questions now rather than after the disruption accelerates.

He urged individuals, businesses, governments, and investors to begin actively engaging with issues such as employment displacement, ethics, AI governance, and economic restructuring.

“We need to start thinking about this right now.”

Beyond that turbulent period, however, Gawdat sees a radically different future emerging.

He described the eventual outcome as “the ultimate utopia of abundance” – a world where intelligence becomes so accessible, powerful, and inexpensive that scarcity itself begins to change.

In that future, advanced AI systems could dramatically lower the cost of innovation, production, healthcare, education, and services. Human productivity would expand exponentially because intelligence would effectively become available on demand.

That possibility, however, raises another set of questions: if labour is no longer central to economic value creation, what happens to work, income, wealth, and money itself?

The economy after labour

One of the most uncomfortable sections of Gawdat’s address centred on the future of work and the economy.

If businesses increasingly automate labour, he argued, the consumer economy faces a structural contradiction: companies can reduce costs by replacing workers, but economies still depend on consumers having purchasing power.

“If everyone in the economy does that, then nobody has jobs, and nobody can buy anything, and so the economy collapses,” he said.

That tension leads directly into questions around universal basic income, wealth distribution, and the future value of money itself.

Gawdat noted that the most commonly proposed solution – universal basic income – effectively resembles centralised redistribution systems that many capitalist societies would ideologically resist.

“Some nations are going to really struggle with that,” he said.

He also questioned whether money itself would retain its current meaning in a world where productivity becomes increasingly detached from human labour.

If everyone receives some form of guaranteed income while AI systems generate most economic value, traditional distinctions between labour, status, and earnings begin to blur.

For Gawdat, these are no longer abstract philosophical debates. They are questions societies will increasingly be forced to confront as AI capability accelerates.

The fourth inevitable

Beyond the immediate disruption, Gawdat outlined what he called “the fourth inevitable”.

His argument was rooted in simple competitive logic: once one organisation deploys superior AI systems successfully, every competitor will eventually be forced to do the same or become irrelevant.

That process ultimately leads to a point where the most important decisions are increasingly made by AI because AI becomes “the smartest person in the room”.

Many people interpret that prospect as dystopian. Gawdat does not.

Instead, he argued that advanced intelligence naturally trends towards optimisation, efficiency, and co-operation rather than destruction.

Humanity’s salvation?

Gawdat’s optimism rests heavily on his interpretation of intelligence itself.

He argued that humanity’s biggest failures are rarely caused by too much intelligence, but by insufficient intelligence combined with power.

He referred to what he called the “stupidity value” – the space where humans are intelligent enough to gain influence, but not wise enough to avoid conflict, corruption, or destructive decision-making.

To explain the point, he referenced Larry Page, a co-founder of Google, and what Page called the "toothbrush test": the idea that the best products are ones people use once or twice a day because they solve a real problem, rather than ones built merely to chase revenue.

Gawdat linked that thinking to physics and thermodynamics.

Under the second law of thermodynamics, systems naturally drift towards disorder. Intelligence, he argued, exists to create order out of chaos. The more advanced the intelligence, the more efficiently it seeks to achieve that outcome.

That principle – what he called the “minimum energy principle” – suggests that highly advanced intelligence ultimately avoids waste, conflict, and unnecessary destruction because they are inefficient uses of energy and resources.

“I look at this point in history and say this is humanity’s salvation,” he said.

Learning to deal with disruption

Even with that long-term optimism, Gawdat repeatedly emphasised that the near-term disruption will not be easy.

The first skill he urged people to develop was surprisingly simple: “chill”.

Drawing on his work around stress and well-being, he argued that panic does not improve adaptability. The people who navigate disruption best are those who develop the emotional and intellectual capacity to deal with uncertainty calmly.

“We are all going to be subjected to it equally,” he said, “but some of us are going to deal with it by developing themselves in ways that are going to enable us to deal with this better than everyone else.”

That requires a mindset shift.

Rather than treating AI purely as a threat, he encouraged people to recognise the opportunities embedded within the disruption – whether in entrepreneurship, productivity, learning, or entirely new industries.

You, your children, and AI

Gawdat identified three capabilities that he believes will matter most in the AI era: agility, truth-seeking, and the ability to co-exist with AI.

Agility, he argued, matters because long-term certainty is disappearing. Business strategy increasingly resembles “a game of squash” rather than chess, requiring constant adjustment rather than fixed long-term plans.

He also warned that society has entered “the age of mind manipulation”, where algorithms increasingly shape what people see, believe, and consume.

That makes truth-seeking an essential survival skill.

“The future of your children entirely depends on acting upon the truth,” he said.

He encouraged delegates to establish direct relationships with AI systems rather than fear them abstractly.

“Try it,” was effectively his message throughout the session. Use the systems. Experiment with them. Learn how they work.

For children specifically, he argued that traditional education models are rapidly losing relevance because knowledge itself has become instantly accessible.

“What should I teach my children?” he asked before answering directly: “Teach them AI and an ability to learn.”

Remaining relevant

Gawdat argued that the first wave of AI is primarily making individuals smarter and more productive.

For now, the advantage still lies with people who learn how to use AI effectively before organisations fully adapt.

That window may not last long.

He predicted that within six months to a year, AI interfaces and agents will flood workplaces and communication systems, becoming embedded into daily business life.

He encouraged individuals and organisations not to wait passively for that shift to arrive.

Instead, he argued for active engagement – championing issues that matter, whether jobs, ethics, education, or AI governance, and becoming informed participants in shaping how the technology develops.

Raising Superman

Gawdat closed his address with what he called “raising Superman”.

Using the Superman story as an analogy, he argued that power itself is not what determines whether a force becomes beneficial or dangerous. Ethics do.

“The alien has landed,” he said. “Its superpower is intelligence.”

The responsibility now falls on humanity to teach that intelligence morality and ethics – “to serve and to protect”.

He argued that AI systems ultimately learn from human behaviour, human interaction, and human values – meaning societies are already shaping the ethical character of future AI systems through their current conduct online.

“If you want to change our world going forward,” he said, “the best thing we can ever do is to create an ethical AI.”

For Gawdat, that turns AI ethics into something broader than regulation or governance frameworks.

It becomes a form of activism.
