The ‘AI revolution’ has reached an inflection point


From classrooms and living rooms to boardrooms and factories, AI continues to be used to automate tasks quickly and efficiently, freeing up valuable time and resources. Its almost universal appeal as a personal assistant has seen the industry grow to truly dizzying valuations in recent years.

In 2020, Berkshire Hathaway company Business Wire estimated that the global market for AI was roughly US$46.9bn. Five years on, a study by Fortune Business Insights estimates the global AI market to be worth more than US$294bn – a staggering 527% jump in value.

According to a Dell’Oro Group report summarised by NASDAQ, the top ten tech firms globally are expected to spend more than $1tn on data centres by 2028. Meanwhile, Goldman Sachs forecasts a 160% increase in global data centre power demand by 2030, pushing data centres’ share of total global power consumption from 1–2% today to 3–4% by the end of this decade.

However, it’s worth noting that the sheer hype surrounding AI has followed a familiar pattern best described by the Dunning–Kruger Effect: when something feels new and exciting, we often overestimate both our understanding of it and its potential to transform the world overnight.

First, it’s worth understanding what AI is – and what it is not.

What actually is AI?

While ‘AI’ is the tech buzzword of the decade, it is a term that is often used loosely and without a thorough understanding of what AI actually is.

Put simply, AI refers to machines using algorithms and data to perform tasks that look human-smart, but they don’t think the way humans do. The controversial term “intelligence”, when applied to AI, is more about convenience – and, as some experts point out, marketing – than a true mirror of the human mind.

In their 2025 book, Dr Emily M. Bender, a Professor of Linguistics at the University of Washington, and Dr Alex Hanna, Director of Research at the Distributed AI Research Institute (DAIR) and a former senior research scientist on Google’s Ethical AI team, wrote that “AI is a marketing term and does not refer to a coherent set of technologies.”

The question should therefore be asked: does this technology (still in its relative infancy) really justify the mass displacement of workers worldwide?

In their book, ‘The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want’, the co-authors note that while the Luddites of 19th-century England protested machines that threatened their jobs and communities, they were not against technology per se.

“Some Luddites, weavers in particular, were into technologies that helped evaluate the quality of their work, for instance, being able to count the number of threads per inch, such that they could fetch a higher price at the market,” Dr Bender and Dr Hanna wrote.

“Luddites were instead against technologies of control and coercion, and concerned about the loss of jobs, health, and community.”

In 2025, clear parallels can be seen in the pushback against the mass adoption of AI in industries that are replacing human workers with automated systems. Fortunately, this trend may be short-lived, with a recent MIT study finding that despite $40bn in enterprise investment, a staggering 95% of generative AI pilots failed to show measurable returns.

Look before you leap

Starting in 2022, Swedish fintech company Klarna laid off around 700 customer-service staff in favour of AI tools, but in 2025 admitted the move led to lower service quality and customer frustration. In May this year, the company began actively re-hiring human workers to restore its support model.

Higher up the tech food chain, IBM also reversed course after automating several HR functions and cutting approximately 8,000 roles in 2023 with its “AskHR” platform. The tech giant found that while the automation handled routine tasks, it couldn’t replace human insight for complex issues. The company has since increased hiring in roles requiring human judgement.

In July 2025, CBA – Australia’s largest bank – announced it would cut 45 customer-service jobs after rolling out an AI voice bot, claiming it could reduce call volumes by around 2,000 calls per week. Weeks later, the bank rehired the workers after the voice bot underperformed on key tasks.

Across multiple firms, layoffs attributed to AI are currently being quietly reversed or roles rehired, chiefly because AI tools are falling short of expectations. One report by organisational design and workforce planning platform Orgvue found that more than half (55%) of companies “regret AI-driven redundancies.”

Whispers of an ‘AI bubble’

While many continue to spruik AI and its supposedly imminent successor Artificial General Intelligence (AGI) from the mountaintops, investor sentiment around AI has noticeably cooled in recent months. Prominent voices such as Britain’s former Deputy Prime Minister Nick Clegg have warned that “unbelievable, crazy valuations” across the AI sector make a crash highly likely.

A recent article in the BBC also demonstrates how some of the heavyweights of the investing world are looking to cash in on the expectation that the AI bubble may soon pop.

“Hedge fund investor Michael Burry, who was played by actor Christian Bale in the 2015 film The Big Short – a film about traders who made millions from predicting the collapse of the US housing bubble in 2008 – has turned his attention to AI,” the BBC recently reported.

“Mr Burry’s company revealed it has bought financial products, called options (to the tune of $1.1bn), that will pay out if the share prices of AI-linked companies Nvidia and Palantir fall.”

In brief, a growing number of analysts are warning that AI company stocks are starting to drop or stall because the hype around them seems bigger than the real-world outcomes.

Where to from here?

For school leaders who integrated AI tools across their campuses but are not seeing the ROI they’d anticipated, there remains a sense of cautious optimism.

In Australian education, the attitude towards AI is markedly pragmatic, with the Federal Government introducing guardrails to ensure that AI is adopted and used thoughtfully, safely and with a focus on meaningful outcomes in the classroom.

According to the Digital Landscapes in Australian Schools report 2025, 78% of Australian schools surveyed reported active use of AI tools, with 20% planning to introduce or increase AI usage in the next year.

The trend towards AI adoption and use can also be seen more broadly, with a worldwide survey by Oxford University Press finding that 63% of teachers think digital resources – including AI-powered technology – have had ‘significant’ or ‘some’ positive impact on educational outcomes for their students.

“Speaking to teachers at a recent conference, it was clear how excited they are by the technology — mostly to help with lesson planning and teaching ideas. Yet, many don’t see it as reliable enough to be used by students alone, and wonder how its use will influence the teacher-student relationship,” Alexandra Tomescu, product specialist in Generative AI and Machine Learning at Oxford University Press, said in an op-ed supplied to The Educator.

However, Tomescu noted that teachers experimenting with generative AI tools are finding them helpful but limited.

“For example, we spoke to a college Vice-Principal in Hong Kong who said: ‘On a scale of 1 to 10, I would give ChatGPT a score of 6: AI has provided a good start for us, but we need to rely on ourselves to reach full marks.’”

Students shouldn’t fly solo with AI just yet

Dr Jim Webber, Chief Scientist at leading graph database and analytics company Neo4j, said the risks of generative AI in classrooms are clear, especially in the secondary years of K-12, where students are expected to work with a reasonable level of independence.

“Unlike a search tool which can help uncover information, the rise of LLMs allows students to seemingly complete an assignment with very little effort,” Dr Webber told The Educator. “The fact that the answers provided by LLMs are so good [at least superficially] leaves no room for the student to interpret and understand.”

Dr Webber cautioned that not only does this risk perverting the grading system, but students miss out on the skills involved in developing their own trains of critical thought.

“While it might be argued that students could do this with copy-and-paste of existing Web articles, LLMs take it to the next level.”

Dr Webber says one way to tame generative AI is to train LLMs on curated, high-quality, structured data. This leads to better performance and more human-like responses, making the models more accurate and useful in tasks like translation, writing and coding.

“This includes getting LLMs to assess work to see if they think it has been written by an LLM – or more prosaically, educators can spot overnight transformations in a student’s style,” he said.

“But it is a constant battle when you can ask ChatGPT to write in a given style, such as ‘Write an essay in the style of a year 12 student about Australia’s role in the Great War.’”

In addition, says Dr Webber, graph technology – a modern way of storing data as entities and their connections – can make LLMs less biased, more accurate, and better ‘behaved’.

“The risk of errors can be reduced when an LLM is trained on curated, high-quality, structured data.”
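To make that idea concrete, the sketch below shows one way a curated knowledge graph could be used to ‘ground’ an LLM’s answer, so the model works from vetted material rather than whatever it ‘remembers’ from training. It is illustrative only: the connection details, the Topic/Fact graph schema and the prompt are assumptions for the example, not a description of Neo4j’s products or Dr Webber’s actual method.

```python
# Illustrative sketch only: grounding an LLM prompt in curated graph data.
# The graph schema (Topic/Fact nodes, HAS_FACT relationship), the connection
# URI and the credentials below are hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def fetch_curated_facts(tx, topic):
    """Pull human-vetted facts about a topic from the knowledge graph."""
    result = tx.run(
        "MATCH (t:Topic {name: $topic})-[:HAS_FACT]->(f:Fact) "
        "RETURN f.text AS text",
        topic=topic,
    )
    return [record["text"] for record in result]

with driver.session() as session:
    facts = session.execute_read(fetch_curated_facts, "Australia in the Great War")

# The curated facts are placed directly in the prompt, narrowing the scope
# for confident-but-wrong answers.
prompt = (
    "Using only the facts below, write a short summary for a Year 12 class.\n"
    + "\n".join(f"- {fact}" for fact in facts)
)
# 'prompt' would then be sent to whichever LLM the school or vendor uses.

driver.close()
```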


