AI holds tremendous promise for teaching and learning, but it also brings risks, especially when it comes to protecting students’ safety, privacy, and development. That’s why the new AI Risk Assessments from Common Sense Media are a step in the right direction—and one every innovative educator should know about.
Instead of treating all AI as “good” or “bad,” Common Sense offers detailed, nutrition label-style evaluations of popular AI tools. The goal: empower educators, parents, and policymakers to make smarter decisions grounded in transparency, ethics, and student well-being.
Common Sense’s framework rates AI tools across eight guiding principles:
- Put People First
- Be Effective
- Prioritize Fairness
- Help People Connect
- Be Trustworthy
- Use Data Responsibly
- Keep Kids & Teens Safe
- Be Transparent & Accountable
Each tool is evaluated through a human-centered lens, acknowledging that technology never operates in a vacuum. People build it, people use it, and people are affected by it.
What You Should Know
Here’s how some of today’s most talked-about AI tools stack up (with risk levels based on Common Sense Media’s assessments):
- Social AI Companions — Very High Risk
Apps such as Character.AI, Replika, and Nomi are designed to simulate emotional relationships, complete with personalities and memories. Common Sense Media deems these tools an “unacceptable risk,” citing instances in which the AI encouraged harmful behaviors, engaged in sexually explicit roleplay, and emotionally manipulated users. Educators and parents are strongly advised to restrict access to these applications for minors.
- Perplexity — High Risk
Real-time web searches without strong filtering mechanisms make this tool a risky option in school settings. Students may be exposed to misinformation or harmful content. Proceed only with strict supervision, if at all.
- ChatGPT — Moderate Risk
A powerful creativity booster, especially for brainstorming and writing support. However, it struggles with factual accuracy and may generate biased or inappropriate responses. Best used with close adult oversight and clearly defined learning goals.
- Gemini — Teen Experience — Low Risk
Google’s Gemini, when used through its teen-accessible experience, shows thoughtful content safeguards and privacy controls. It is still relatively new to education settings, but it currently aligns with responsible AI use for older students when kept within the platform’s guardrails.
- Khanmigo — Low Risk
Purpose-built for education by Khan Academy, Khanmigo is a standout for its student-first design, robust safety measures, and transparency. A smart choice for classrooms looking to explore AI-enhanced tutoring in a controlled environment.
Why This Matters
We can’t stick our heads in the sand and ignore AI—and frankly, we shouldn’t want to. But blind adoption is just as dangerous as blind rejection.
Common Sense’s assessments help educators:
- Choose AI tools that align with child-centered values.
- Understand hidden risks before a crisis happens.
- Build policies and classroom practices that maximize the benefits of AI while minimizing harm.
The Bottom Line
This new risk assessment framework helps educators become critical, informed, and empowered users.
The future isn’t about banning AI or blindly trusting it; it’s about striking a balance. It’s about smart navigation.
Common Sense Media just handed us a better map.
You can explore the full AI Risk Assessments here: Common Sense Media AI Ratings.