
Last week, more than 200 Year 9 students from across Sydney gathered at a new Gen AI workshop aimed at tackling the increasing challenges young people are facing online.
When asked about the biggest challenges they face online and where more education would help, 41% of students said recognising fake news or misinformation, while nearly one-third (31%) admitted they cannot tell if content is made with Gen AI.
Twenty-seven per cent of those surveyed during the workshop, run as part of the Optus Digital Thumbprint Program, identified AI scams and extortion as the biggest challenge they face in protecting their personal information online.
Since 2013, the Optus Digital Thumbprint Program has been delivered to more than 1,140 schools and 670,000 students. Led in schools by dedicated facilitators, or delivered through virtual, interactive digital workshops, the program explores core principles around online safety, wellbeing and responsible technology use.
Dom Phelan, Optus Digital Thumbprint Program Facilitator, said digital literacy skills, such as those taught in the Digital Thumbprint workshops, are extremely important in helping students determine whether content is AI-generated.
“For example, fact-checking across multiple sources, looking for signs such as pixelated or blurry content, voices not quite matching facial movement, etc.,” Phelan told The Educator.
“A lot of schools I speak to have introduced AI literacy programs that feature modules tailored to their areas of study and, for senior students, their career aspirations, so it’s great to see educators embed this rapidly growing technology into classroom learning.”
Phelan said he is also often impressed by how many students already use AI and understand how it works.
“But schools are now also getting students not only to use it for studying but also to critically assess AI’s strengths, weaknesses and limitations, as well as how the technology collects data and how students’ data is used to train its models,” he said.
“Students also need to be able to judge the implications of its use, question assumptions, assess reliability and identify potential algorithmic bias, risks and benefits. They must also demonstrate an awareness of the ethical issues and privacy and security concerns.”
When asked how exposure to deepfakes impacts students’ mental health and trust in digital information, and what supports schools should consider to address these impacts, Phelan said education, reporting and planning are three key areas.
“First of all, educating students on these challenges, and where possible parents too, whether that’s through webinars or sharing resources such as our Digital Thumbprint Gen AI guide,” he said. “We also need to normalise help-seeking and reporting.”
Phelan said schools should encourage students and their families to report any AI-driven scams and deepfakes to the online platform and to the eSafety Commissioner and/or the police.
“It’s no longer just a school matter outside of school hours, and parents need to understand that they can make a report on behalf of their child if needed. But this has a huge detrimental effect on the mental health of our students,” he said.
“If a student’s image is used in a deepfake and spread on social media, the humiliation is magnified 1,000 times.”
Phelan said it’s all about helping students understand empathy and compassion.
“There are so many resources on the Digital Thumbprint and eSafety websites that can help students, their families and schools deal with these devastating effects,” he said.
“Lastly, planning how to manage these incidents is critical. A couple of years ago I’d hardly hear of any deepfakes of students or teachers being shared. This is happening a lot more, and I think the proliferation of things like ‘nudify’ apps is making it far easier to create harmful content.”