Is the AI Bubble About to Burst, and Is That Such a Big Deal?

AI is becoming a very popular phrase in the Western world. It’s coming up more and more in almost every corporate and business conversation. And it doesn’t end there - almost every news piece now has an AI segment or mention. Even government officials across the world strongly believe in the promise of AI.

I was invited to speak at a conference in Kazakhstan a couple of weeks ago. On my two visits this year to this beautiful country, rich in culture and history, I had the privilege of speaking with two Ministers of Education, offering advice on how to implement safe and responsible AI in the country.

The Minister of Education, Zhuldyz Suleimenova, requested a private meeting with me and a few other world-renowned AI experts - Kok-Sing Tang, Ryan Baker and Dan Fitzpatrick - seeking advice on these two questions specifically:

  1. How to implement AI safely for students.

  2. How to allay the fears of teachers who believe AI will take their jobs, and parents who are worried about the effect AI has on their children.

These are important concerns, critical to ensuring AI is beneficial when it is developed and deployed. Here are the thoughts I shared with the minister:

Governments should not feel helpless about the AI revolution. We all have agency. AI can be built in-country without heavy reliance on tech exports from the US or China; the knowledge and skillsets are available, and AI is simply ML (machine learning) - it’s not rocket science. If national AI sovereignty is achieved, policymakers can develop safe and responsible AI policies that protect students, teachers and residents. They can also develop AI solutions tailored to their country’s specific problems, which opens up numerous opportunities that could drive economic growth, job creation and innovation while protecting students and teachers.

We’re seeing more countries outside the US and China building their own AI models. Switzerland, for example, recently launched Apertus, a ‘transparent’ LLM built on ethical AI principles. Apertus covers many languages that are under-represented in LLMs, such as Swiss German and Romansh.

Singapore also launched MERaLiON, an LLM trained on Southeast Asian languages. It is trained to focus on customer service and elderly care, and was developed with a governance framework and red-teaming evaluation. In other words, it was built safely, using responsible AI principles - which is crucial to a successful AI strategy.

Secondly, once a country starts gaining AI sovereignty, it can ensure the right laws and safeguards are in place to protect its young and vulnerable people. It can also focus on building AI literacy programmes for students, parents, teachers and civil society. Nationwide AI literacy programmes not only upskill and reskill the population; they also build trust and confidence in the government and the technology.

Feeling complacent, complicit and helpless in the AI revolution will only lead to further AI dominance, harm and subjugation to higher powers working towards world domination to the detriment of humanity (and the planet). It’s not too late to step up, take the bull by the horns and start working on country-specific AI solutions - building sustainable AI chips, responsible and ethical AI applications, and relevant programmes that protect jobs, young people and citizens.

The possibility of the AI bubble bursting is now a real threat. AI companies’ valuations are quite inflated, and the circular network of investor, customer and supplier relationships between OpenAI, Microsoft, Oracle, xAI and other players such as CoreWeave, Nebius, Mistral, Nscale and so on is causing growing concern among investors.

Of course, if you’re an investor who has invested heavily in tech stocks, you might want to consider diversifying your portfolio. The promise of AI is a longstanding story: the AI winters, springs and summers of past decades have shown that science can sometimes fail, and that optimism about scientific breakthroughs can occasionally turn out to be the fantasies and dreams of over-excited researchers and tech bros.

The not-so-recent talk of LLMs hitting a wall - the argument that continued scaling won’t keep improving them - is one we shouldn’t forget, as it is gradually being borne out. LLM hallucinations are not getting any better; rather, hallucination and sycophancy, and the AI psychosis they can fuel, are driving further harm across society.

On the flip side, China is developing significantly cheaper, competitive, open-source AI models while reducing its reliance on Nvidia’s chips. In China today, taxis and taxi drivers are already being replaced by robotaxis - one example of the country’s advances in AI. I wonder what this famous cat thinks about that!

The much-touted promise of AGI in models such as GPT-5 now seems to be a joke. Take Yann LeCun’s recent plans to leave Meta to build world models as an example. LeCun, considered one of the godfathers of AI, has concluded that LLMs are a dead end in the pursuit of AGI. He even says that cats are smarter than LLMs.

[GIF: a black cat filing its nails]

Gary Marcus, a renowned cognitive scientist and AI scholar, has been critical of LLMs from the very beginning, highlighting their failures and the need for more intelligent, intuitive and safe AI systems.

So is the AI bubble about to burst? Probably. Is that such a big deal? Again, probably - depending on how you look at it. Would it cause a recession? Maybe, maybe not. Financial experts are on both sides of the fence: some say it’s a big deal, and some say it’s not.

What’s important, and what you should focus on, is separating substance from the hype and sales pitches of tech companies, especially those in Silicon Valley. Don’t get carried away by the hype. Sometimes it’s just a sales strategy; sometimes it’s just excitement about something new (remember the dot-com era?), without a clear understanding of the technology.

Personally, I think that if the bubble does burst, it will mean a slowdown in this rapidly developing technology, giving everyone, including policymakers, time to analyse, evaluate, improve and develop safer, more intelligent systems that are built to serve and better humanity, not to replace human intelligence.

And on that note - maybe we should let the bubble burst.
