Common Sense Media recently launched the Youth AI Safety Institute, an independent research and testing organization dedicated to ensuring the AI that children use is safe and developmentally appropriate.
More than half of American teenagers now regularly chat with AI companions. Nearly a third say conversations with AI are as satisfying as—or more satisfying than—talking with real-life friends. Over half are turning to AI tools for homework help.
The Institute will bring significant new resources, technical expertise, and global reach to close the growing gap between AI use and youth safety. It will establish safety standards, build open-source evaluations that AI developers can run against their models, independently test AI products, and publish the results to provide transparency and accountability.
“AI is reshaping childhood and adolescence, yet we are making critical decisions about children’s futures without the evidence we need to ensure it’s safe and in their interest,” said Common Sense Media Founder and CEO James P. Steyer. “The need for transparent AI safety standards and independent testing is more urgent than ever.”
The Institute’s approach is modeled on independent crash-test ratings, which show consumers whether cars are safe, set a clear bar for automakers to meet, and drive improvements in vehicle design. The Youth AI Safety Institute will apply the same model to AI: testing the products children use most, showing parents the results, and holding industry accountable to a high standard of youth safety.
The Institute’s work will extend beyond testing. It will research youth behavior and lead public education campaigns to help families navigate AI in their lives. It will also study the impact of AI on youth well-being and social, emotional, and cognitive development.
“Making the AI that kids use safer is a collective challenge,” said Ellen Pack, Co-CEO of Common Sense Media. “It will take researchers, policymakers, and industry all pulling in the same direction. The Youth AI Safety Institute’s role is to set a high bar: rigorous standards, independent testing, and transparent results that raise the bar for everyone.”
The Institute will operate under Common Sense Media, the nation’s leading kids and tech non-profit with a 23-year track record of protecting and preparing families for the digital age. Philanthropic funders include Lee Ainslie of Maverick Capital, Jim Coulter of TPG, John H. N. Fisher of Draper Fisher Jurvetson, Paul Tudor Jones of Tudor Investment Corp., Gene Sykes of Goldman Sachs, and the Walton Family Foundation. Industry-related funders include Anthropic, the OpenAI Foundation, and Pinterest. Additional funders will be announced in the future.
The Institute is solely responsible for its standards, research, and evaluations, and maintains complete editorial independence over published results. Common Sense Media has previously published rigorous assessments that identified risks for teens with leading AI chatbots, including ChatGPT, Claude, Gemini, and Meta AI.
“Building safe AI for the next generation requires thoughtful collaboration, careful research, and safeguards grounded in real-world expertise. Like so many parents, I think about the impact this technology will have on young people and how important it is that we get it right,” said Daniela Amodei, President and Co-founder of Anthropic.
“AI holds enormous promise for young people, opening up new ways to learn, create, and explore their interests,” said Wojciech Zaremba, Head of AI Resilience at the OpenAI Foundation. “As these tools become part of everyday life, it’s important that they’re designed to be safe, trustworthy, and appropriate for different stages of development. That’s why independent evaluation and public accountability matter.”
The Institute is working alongside a growing network of strategy, research, and technical evaluators, including established partnerships with Transluce, Humane Intelligence, and Stanford Medicine’s Brainstorm Lab for Mental Health Innovation. It welcomes collaboration with other leading experts and AI safety evaluators across the globe.
“We’re in the deep end of the pool with AI now,” said Jonathan L. Zittrain, Director and Co-Founder of the Berkman Klein Center for Internet and Society at Harvard University. “Some, like the frontier labs and early adopters, have jumped in; others have felt tugged in or pushed—or simply felt water rising around them. This is a vital and urgent initiative to help all of us get an independent and more thorough sense not only of how models work in a beaker, but also how they are impacting the young people who use them.”
The Institute will be guided by a Board of Advisors composed of distinguished experts in AI, youth development, child safety, mental health, and education, with a conflict-of-interest policy that excludes current employees or affiliates of funders or partner organizations.
Dr. Vivek Murthy, former Surgeon General of the United States and a member of Common Sense Media’s Board of Directors, will be the Board’s liaison to the Institute’s Board of Advisors. “We are at great risk of making the same mistakes with AI that we made with social media: subjecting children to new technologies without adequate safety guardrails and thereby causing harm to countless lives,” said Murthy.
“For all its potential uses, AI—and AI chatbots in particular—has the potential to damage the mental health, social development, and well-being of young people, too often with tragic outcomes,” Murthy added. “We urgently need policies and institutions that will demand transparency, allow for independent safety evaluations, and enforce accountability. The well-being of the next generation is at stake.”
Learn more about the Youth AI Safety Institute here: www.commonsense.org/institute.
About Common Sense Media
Common Sense Media is the leading non-profit organization dedicated to improving the lives of kids and families by providing the research-backed information, education, and independent voice they need to thrive in the age of apps, algorithms, and AI. The organization rates, educates, and advocates to protect and prepare kids online. Its ratings, research, and resources reach more than 150 million users, over 1.5 million educators, and more than 100,000 schools worldwide every year.