Imagine a classroom where AI helps students learn more effectively and faculty have more time to focus on what they do best. That future is closer than you think, thanks to the rise of generative AI. Higher education professionals find themselves in a new world with these emerging tools and their capacity to dramatically change how everyone thinks and learns. Crucial questions are arising about the appropriateness of AI and its impact on thinking, creativity, and intellectual property.
Unlike many instructors, the students in our classrooms are “digital natives” (Prensky, 2001) who have known, used, and relied on technology their whole lives. Often, however, this leads to faulty assumptions about what these students know and can do. Although they are typically fluent and comfortable with technology tools, they are not always able to be metacognitive about how and why they use them and how that use can benefit or impede their learning.
Additionally, generative artificial intelligence cannot be viewed as just another technology “tool.” Its breadth of use is unlike any prior technological development. Unfettered and uncritical use of generative AI by students will certainly affect learning gains and outcomes. We cannot assume that students will learn how to ethically and critically use AI without overt instruction and modeling. As a result, responsibility lies with all instructors to consider how and when they can allow students to use AI in their coursework. Instructors should model what that looks like, what tools are most reliable, and how students can cite and acknowledge AI use. Only through discipline-specific modeling can students gain the “meta AI” ability to make future determinations about responsible use of the tools.
Initial data suggest that faculty and students are experiencing ambiguity and need training to fully understand AI (Petricini et al., 2024). Once faculty comprehend the possibilities of this technology, they can, in turn, model and teach appropriate use to their students. Similar to metacognition, where individuals monitor and manage their own cognitive processes, higher education has a responsibility to help students develop meta AI skills to monitor and manage AI usage. This responsibility presents both opportunities and concerns. As instructors navigate these with a balance of curiosity and humility, some examples may help.
Instructors can:
- Experiment with using AI as a tool to give student feedback. Upload assignment instructions, a rubric, and sample work, and ask AI to provide feedback.
- Explore different prompts and the range of output the AI generates.
- Think like a student to assess the appropriateness of the feedback and its value to student learning.
- Put the onus on the student. Ask students to explain a concept they are learning, provide an example to an AI tool, and let the AI give feedback.
- Evaluate how well AI supports student understanding. Students can also share the output with faculty, providing another opportunity for faculty to “see” student thinking and to give feedback.
After experimenting, bring this experience into the classroom and discuss with students what you learned, where it can be helpful, and areas to avoid using AI. Overtly model how you used it and your evaluation of that use. When appropriate, consider creating a discipline-specific guide for fellow faculty and students on what generative AI is, when to use it, and its potential and drawbacks in your field of study.
Another avenue to developing meta AI skills is to view AI through the lens of scholarly teaching. Are there aspects of your teaching that can be improved by its use? How can AI be used to save time and provide valuable information to support learning? AI can be integrated into large-enrollment courses to provide immediate and ongoing feedback on assignments that are extremely time-consuming for one professor to grade. In professional programs, AI can be designed to simulate patients, allowing students to practice initial communication skills prior to their first clinical experiences. In online courses, these tools can be created to simulate classmates, and online students can have real-time discussions or engage in other active learning techniques, such as think-pair-share, that were not possible before. Modeling how decisions are made regarding the use and integration of AI is a first step for novice users in developing meta AI skills.
Finally, it is imperative that faculty work actively to identify and address when students use AI tools in unauthorized ways. If an instructor disallows AI and students use it anyway with no comment or consequence, they receive the message that AI-generated work is undetectable and acceptable. Any feedback the instructor gives on that work is not feedback on the students’ knowledge and abilities, yet students can misinterpret that feedback to mean that the AI use was an appropriate and successful choice. This is counter to the goal of meta AI and helping students understand how to ethically and responsibly use AI without interfering with their own learning.
As faculty proceed through this new era of education, we must challenge ourselves to learn, grow, and adapt to the possibilities of generative AI while also relying on what we know about learning and student success. To kickstart this journey, consider taking the following initial steps: attend workshops or training sessions on AI technology, collaborate with colleagues to share insights and strategies, and integrate AI tools into your curriculum incrementally. As students move through the education process and into the world of work, they must develop intrinsic motivation and metacognitive skills to maximize their chances of success. Those metacognitive skills now must also embrace a meta AI ability so that our future professionals can harness the tools in responsible ways that don’t interfere with all they need to learn and know for themselves.
Kelly Ahuna, PhD, is the Director of the Office of Academic Integrity at the University at Buffalo (UB). Faculty for 20 years, Kelly first ran an undergraduate critical thinking program and later worked with graduate teacher candidates on curriculum and instruction. Her overarching interest in student and teacher success led her to the growing field of academic integrity work and her current role as the inaugural director at UB where she has developed policy and procedures, established a robust remediation process, and co-founded the ICAI Northeast Regional Consortium. Kelly holds a bachelor’s degree in English with secondary teacher certification from Dickinson College, a master’s degree in higher education administration from the University of Vermont, and a doctoral degree in sociology of education from UB.
Michael Kiener, PhD, CRC is a professor at Maryville University of St. Louis in their Clinical Mental Health Counseling program. For the past 10 years he has coordinated their Scholarship of Teaching and Learning Program, where faculty participate in a yearlong program with a goal of improved student learning. In 2012 and 2024 he received the Outstanding Faculty Award for faculty who best demonstrate excellence in the integration of teaching, scholarship and/or service. He has over thirty publications including a co-authored book on strength-based counseling and journal articles on career decision making, action research, counseling pedagogy, and active and dynamic learning strategies.
References
Petricini, T., Wu, C., & Zipf, S. (2024). Perceptions about artificial intelligence and chatGPT use by faculty and students. Transformative Dialogues: Teaching and Learning Journal, 17(2), 63-87. https://doi.org/10.26209/td2024vol17iss21825
Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5). Retrieved from http://www.marcprensky.com/writing/Prensky%20-%20Digital%20Natives,%20Digital%20Immigrants%20-%20Part1.pdf