
By Matthew Esterman
Australian schools are now deep in the age of artificial intelligence, whether they planned to be or not. Teachers are using generative AI tools to reduce planning time, customise activities for different learners in their classroom, and explore many other new ways of working. Students are turning to AI for explanations, drafts, summaries and, in some cases, shortcuts that leave teachers guessing whose work they’re marking. Leaders are finding opportunities to make workflows more efficient and redirect staff effort to new and better initiatives.
With constant updates, new platforms, new capabilities and shifting priorities, it has been extremely hard to find sure footing when it comes to AI in schools.
As one principal said to me recently: “AI is already here, and it’s moving faster than our systems, policies and capability can keep up.”
The promise is huge, but so are the risks. Who doesn’t want to address the extreme workload pressures currently felt by staff and leaders in schools? Who doesn’t want to meet the varied needs of students in our classrooms? But who, exactly, is responsible for driving AI innovation in schools, and who is responsible when things go wrong?
Much of my work begins in the teaching and learning space: what role does AI play right now in schools and in the lives of students? How might we shift practice and assessment to meet the challenge? What practical strategies can we use today to realise the potential of AI? But AI influences every aspect of the school: as a place of learning, a place of work and a community.
This tension between enormous opportunity and real vulnerability is forcing schools to rethink their approach to governance, policy, risk and innovation all at once.
A growing number of school leaders are now referencing a practical, two-stage model from DecisionLine as a way to regain control: not to slow innovation down, but to give it a safer runway.
The opportunity: an extra pair of hands for every teacher
For exhausted staff, AI can feel like the first genuine relief in years. Early trials show teachers saving hours each week on planning, administration, and resource generation. Generative AI can help a teacher to:
- turn a syllabus dot point into a fully differentiated lesson sequence
- generate exemplars, rubrics or feedback instantly
- adapt readings for students with additional needs
- streamline communication and reporting
- summarise behaviour data or wellbeing trends
For students, the potential is equally compelling. AI used as a tutor or “after hours teacher” (a direct student quote) can break down complex ideas, give tailored practice questions, or simulate real-world scenarios. Done well, this can allow teachers to be incredibly responsive to the complexity of the modern classroom.
The risk: safety, integrity and trust on the line
But alongside this remarkable potential are concerns that are real and growing.
Leaders are reporting challenges with academic integrity, rapid spread of AI-generated misinformation, and students becoming overly dependent on their “black box” helpers. There are also urgent questions about child safety, privacy, data storage, deepfake threats, and the reliability of AI-generated content.
Hallucinations, biased outputs, privacy breaches, and misuse are already affecting schools more than many realise.
Students are also using AI in ways that don’t fit the “teaching and learning” or “IT systems” buckets into which AI use is often thrown. Right now, in every school, there will be a student with an AI boyfriend or girlfriend. They may not recognise that they are using it in this way, but they are absolutely building relationships with machines that go well beyond a transactional request for an essay.
Many leaders admit they don’t yet have clear policies, trained staff, or a governance structure that keeps pace with emerging risks. An ASBA survey from early 2025 indicated that only 25% of governing bodies had been briefed on AI. And as laws around AI tighten, schools know they will soon be expected to demonstrate far more oversight and documentation than they currently have.
What school boards and principals say they lack is not enthusiasm for AI’s potential, but a safe, structured way to adopt it without harming students or staff, or damaging the school’s culture and reputation.
From uncertainty to action: a two-stage response
This is where DecisionLine’s two-stage approach has begun to resonate. Not because schools want another framework, but because they need a path out of uncertainty.
The model is straightforward:
- Stage 1 builds stability.
- Stage 2 enables safe adoption.
Leaders describe it less like a governance framework and more like finally having a map to guide them.
Stage 1: Build the safety net before you build the future
The first stage – “Setting the Foundations” – addresses the problems schools are most worried about: unclear decision-making, inconsistent practice, and unknown risk exposure.
Schools begin with leadership alignment, ensuring boards and executives understand both the promise and the dangers. This is where many misunderstandings evaporate. Leaders discover just how widely AI is already used by teachers and students, and how exposed they may be without a clear stance.
Next comes establishing context: evaluating current staff usage, student behaviour, existing systems, and risk appetite. What emerges is often a confronting but necessary baseline.
Schools then appoint an AI Lead and a cross-functional team. This is not about creating “the AI people” as an exclusive club, but about ensuring governance, teaching, wellbeing, ICT, compliance and data management are all represented. This is where siloed decision-making begins to dissolve.
With the right people around the table, schools define an AI strategy, update relevant policies, deliver staff training, and assess data governance and cybersecurity vulnerabilities. These steps, taken together with strategic planning, risk analysis and stakeholder engagement, form what we refer to as the school’s AI Safety Capsule: a protective core that stabilises the organisation and prepares it for safe adoption.
Leaders who have implemented Stage 1 describe it as the moment the fear drops away. They know where the risks are, who is responsible for what, and what guardrails are in place.
Only then do they move.
Stage 2: Adopt AI with eyes wide open
Once the foundations are in place, schools shift from “holding on tight” to “moving with purpose.”
Stage 2 focuses on organisational adoption: identifying where AI can make the most positive difference (whether in learning, wellbeing or operations) and piloting those use cases under controlled conditions. Instead of blanket rollouts or enthusiastic chaos, schools conduct targeted trials.
- A Year 9 English team might test AI-assisted feedback.
- A wellbeing team might trial an early-alert system.
- An admin team might experiment with AI-powered workflows.
These pilots are monitored closely for impact, safety, accuracy, and alignment with school values. Only once a use case proves safe, effective and within the school’s risk tolerance does it progress to a wider rollout.
A deployment roadmap follows, something most schools have never had for technology: it outlines who will use what, when, with what training, and under what conditions. Evaluation and continuous improvement close the loop.
In short: AI adoption becomes measured, documented, responsible and repeatable in the back end; exciting, ambitious, contextual and empowering at the front end.
A turning point for schools
The story emerging from Australian schools is clear. AI is not the enemy. Nor is it the saviour.
It is a powerful new layer of the education landscape, one that demands the same level of governance and professional judgement as curriculum, assessment, finance or safety.
The risks are significant, but so are the opportunities. What schools need is not a pause, but a plan.
By beginning with Stage 1’s stabilising work, and progressing to Stage 2’s structured adoption, schools are finding a way to embrace AI without losing their footing. They are protecting students, empowering teachers, building trust with parents and preparing themselves for future regulation while still improving learning and reducing workload today.
For a sector used to unpredictable change, this two-stage model offers something precious: Confidence. Structure. And a path forward that feels genuinely doable.
If the last two years were about reacting to AI randomly, the next two will be about building the systems that let schools lead, not chase, the future.
If you want to learn how to take your next best step with AI, contact The Next Word here.
Matthew Esterman is the founder and director of The Next Word, a consultancy focused on training, support and strategy for schools and other organisations looking to take their use of AI to the next level. Esterman has more than 15 years’ experience working in schools and beyond as a leading voice in the thoughtful adoption of technology. He is a trained History teacher with two master’s degrees who has made a significant contribution to professional learning in Australia and overseas.

