Debate about which skill sets students should learn, and how their learning should be assessed, is longstanding in education. The advent of AI, however, has created an immediate need to reexamine current educational models and methods. Like it or not, AI now performs much of what educators have traditionally taught and assessed: generating content, synthesizing ideas into coherent essays, critiquing writing, and even solving complex problems. As a result, many educators are at a loss about which skill sets should be the focus of their instruction, and about how they should assess students’ attainment of those skills.
Rethinking Assessment in the Age of AI
Howard Gardner’s theory of Multiple Intelligences (MI), first published in his 1983 book Frames of Mind, was one of the first theories to question traditional methods that focus solely on teaching and assessing basic cognitive abilities. Gardner’s theory proposes that individuals are born with diverse, independent abilities, including linguistic, logical-mathematical, visual-spatial, bodily-kinesthetic, musical, interpersonal, intrapersonal, and naturalistic intelligences. His theory holds significant implications for education and assessment. It departs from traditional notions of intelligence and from assessment methods that focus primarily on teaching and evaluating learners on cognitive capabilities that can be neatly and narrowly measured with standardized tests. Instead, Gardner’s theory encourages educators to diversify their teaching goals, approaches, and assessment methods to cater to the variety of student abilities within a classroom.
At a time when AI is replacing many basic human cognitive capabilities, Gardner’s theory and its implications are more relevant than ever. More specifically, there is even greater need today to equip students with skill sets that AI cannot (as of yet) replace. These include higher-order critical thinking and the practical application of creative, reflective, social, and ethical skills. The question then becomes: How do we educators equip and assess students on such skill sets and attributes, especially since they cannot be measured in conventional standardized ways?
According to Desai (2025), one possible solution is to focus instruction and assessment on the process of students’ thinking rather than on the end results of what students produce. In other words, if AI can generate essays and solutions, educators need to shift from evaluating final products to equipping and assessing students on critical thinking, self-regulation, and motivational skills. Effective assessment techniques need to provide insight into a learner’s thought processes in real time.
Structured Think-Alouds as a Research-Backed Solution
Think-aloud protocols, whereby students vocalize their thoughts and actions as they perform a learning task, are one effective way to assess students’ thinking skills. A 2012 study that I conducted with Dr. Linnea Ehri demonstrated that think-alouds can go beyond revealing a learner’s thought processes: when structured appropriately, they can also facilitate students’ development of important metacognitive, self-regulation, and critical thinking skills, all of which are essential in the current landscape of AI. In our study, we randomly assigned 70 college students to either a structured think-aloud condition or a non-structured, control-group think-aloud condition, and asked both groups to perform an online vocabulary learning task.
Students in the structured think-aloud condition were asked to continually verbalize their learning goal and to evaluate whether their online actions were effective in helping them achieve it. Students in the non-structured, control-group condition were simply asked to think aloud, without that structure. The results showed that students in the structured think-aloud condition demonstrated significantly greater metacognitive and self-regulation skills, and in turn better overall task performance, compared to their control-group peers (Ebner & Ehri, 2013). Although this study was conducted before the advent of AI, it is especially relevant in today’s AI world, because it suggests the power of structured think-aloud methodologies as a means not only of revealing, but also of equipping students with, the higher-order skill sets needed to successfully navigate and critically evaluate AI-generated content.
Rather than fearing or restricting students’ use of AI programs, educators should embrace AI as an educational opportunity to develop students’ higher-order critical thinking, metacognitive, and self-regulated learning skills. By asking students to think aloud in a structured way that requires them to remember their learning goals, evaluate the effectiveness of their online actions, and critically evaluate AI-generated content, educators can more accurately assess students’ development of higher-order skills.
A Practical Way to Assess AI Use in the Classroom
For example, if students are assigned to write a research paper in a history class, evaluating only their final products may be problematic, since it is hard to know whether they used generative AI to assist in writing their papers. Instead, it is important to shift the focus from assessing only the final paper to also assessing the learning and thought processes a student uses to research and write it. Employing a structured think-aloud method can be useful in this regard.
In the case of the research paper assignment, a teacher could record students, or ask students to record themselves, verbalizing their thought processes and actions in real time as they use AI tools to research and write the paper. With the think-aloud protocol, teachers gain greater insight into how students are using the AI tools and the extent to which they are critically evaluating the relevance and accuracy of those tools’ output.
Going a step further and requiring students to think aloud in a structured way can also facilitate students’ acquisition of self-regulation, metacognitive, and critical thinking skills. For example, instructors can ask students to continually verbalize their research goals, and can provide suggested prompts or criteria for evaluating the validity and relevance of AI-generated content.
In summary, we as educators have an important role to play in ensuring the effective use of AI as an educational tool, one that can leverage the deeper and more complex human abilities that AI cannot generate. Ironically, we must prepare students to “outsmart” AI by focusing on developing the harder-to-measure abilities that Gardner wrote about in his original theory of MI, such as ethical, interpersonal, and intrapersonal skills. By encouraging students to communicate their thoughts and ideas in real time through structured think-aloud methodologies, we can assess and equip students with the skills they need to stay on task, critically evaluate AI content, and engage in ethical practices.
Rachel Ebner, PhD, is an educational psychologist who specializes in student learning, instruction, and assessment. She has a longstanding interest in researching, designing, and assessing multi-faceted ways to advance student learning both in and out of the classroom. Her research has focused on investigating ways to help students self-regulate their online learning. Dr. Ebner currently serves as director of student learning assessment and clinical assistant professor of psychology at Yeshiva University in New York City. She holds an M.A. in Developmental Psychology from Columbia University’s Teachers College and an Ed.M. in Risk & Prevention from Harvard Graduate School of Education. She earned her doctorate in Educational Psychology at the City University of New York’s Graduate Center, where she specialized in Learning, Development, and Instruction.
References
Ebner, R., & Ehri, L. (2013). Vocabulary learning on the Internet: Using a structured think-aloud procedure. Journal of Adolescent & Adult Literacy, 56(6), 472–481. Republished in Digital Literacies: An IRA Cross-Journal Virtual Issue (International Reading Association).
Desai, H. (2025). What’s worth measuring? The future of assessment in the AI age. UNESCO. Retrieved November 18, 2025, from https://www.unesco.org/en/articles/whats-worth-measuring-future-assessment-ai-age
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York, NY: Basic Books.