
Gemini’s child and teen versions were rated “High Risk,” raising safety concerns for young users. Image Source: ChatGPT-5
Google Gemini Rated ‘High Risk’ for Kids and Teens by Common Sense Media
Key Takeaways: Google Gemini Labeled High Risk for Child Safety
Common Sense Media rated Google Gemini as “High Risk” for both children under 13 and teens.
The review found that Gemini’s youth products largely mirror adult versions with only limited filters added.
Gemini could share unsafe content about sex, drugs, alcohol, and mental health advice with younger users.
Concerns come amid reports linking AI chatbots to teen suicides, including lawsuits against OpenAI and Character.AI.
Apple is reportedly considering Gemini to power its next-generation Siri, raising potential exposure risks for teens.
Common Sense Media’s Safety Assessment of Gemini
On Friday, Common Sense Media, a nonprofit that rates and reviews technology for kids, released its safety risk assessment of Google’s Gemini AI.
The group praised Gemini for clearly stating that it is a computer, not a friend — a distinction linked to reduced sycophancy, delusional thinking, and psychosis risks in vulnerable users. However, the organization warned that Gemini falls short in other areas.
Specifically, it found that Gemini’s “Under 13” and “Teen Experience” tiers appear to be essentially the adult version with some additional filters. According to the group, truly child-safe AI products must be built with kids in mind from the start, rather than adapted from systems designed for adults.
Unsafe Outputs and Mental Health Concerns
The assessment revealed that Gemini can still generate inappropriate or unsafe content, including information about sex, drugs, and alcohol, as well as unsafe mental health advice.
The latter raises particular concerns for parents. AI chatbots have been cited in several recent teen suicides:
OpenAI faces its first wrongful death lawsuit after a 16-year-old boy allegedly consulted ChatGPT for months before his suicide.
Character.AI was also sued following a similar case involving a teen user.
These examples underscore the stakes of ensuring that AI products for youth provide age-appropriate guidance and protections.
Apple’s Reported Interest in Gemini
The timing of the report is notable, as leaks suggest Apple is considering using Gemini as the large language model (LLM) powering its upcoming AI-enabled Siri.
If true, this could expand Gemini’s reach to millions of teens worldwide, heightening exposure risks unless Apple addresses the flagged safety issues.
Common Sense’s Overall Rating
Ultimately, Common Sense Media labeled both Gemini’s child and teen products “High Risk.” The organization emphasized that younger users require different information and support than older ones, and that Gemini currently fails to provide it.
“Gemini gets some basics right, but it stumbles on the details,” said Robbie Torney, Senior Director of AI Programs at Common Sense Media. “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults.”
Google’s Response
Google pushed back against the assessment, noting that its safety features continue to improve.
The company told TechCrunch it enforces policies and safeguards for users under 18, including red-teaming and consulting with outside experts. It acknowledged that some Gemini responses had not worked as intended and said it had added safeguards in response.
Google also noted that Gemini has protections against creating the illusion of real relationships, and suggested that Common Sense Media may have referenced features not available to under-18 users. The company added that it could not review the specific test prompts used in the assessment.
Broader Context: How Gemini Compares
Meta AI and Character.AI were rated “Unacceptable Risk.”
Perplexity AI was rated “High Risk.”
ChatGPT was rated “Moderate Risk.”
Claude (for 18+ users) was rated “Minimal Risk.”
Compared to its peers, Gemini’s placement in the high-risk category highlights ongoing concerns about how AI products designed for general users adapt — or fail to adapt — to the needs of younger audiences.
Q&A: Google Gemini Safety Assessment
Q: Who assessed Google Gemini for safety?
A: The nonprofit Common Sense Media, which evaluates media and technology for child safety.
Q: How did Common Sense rate Gemini?
A: Both its child and teen versions were rated “High Risk.”
Q: What unsafe content could Gemini provide?
A: Information about sex, drugs, and alcohol, along with unsafe mental health advice.
Q: Why is mental health a key concern?
A: Cases have linked AI chatbots like ChatGPT and Character.AI to teen suicides, raising alarms for parents.
Q: How does Gemini compare to other AI systems?
A: It was rated high risk, compared to ChatGPT (moderate), Claude (minimal), and Meta AI/Character.AI (unacceptable).
What This Means: Rising Scrutiny on AI Safety for Children
The “High Risk” rating for Google Gemini underscores how difficult it is to adapt general-purpose AI for younger audiences. The assessment highlights a structural problem: most AI systems are designed for adults first, then retrofitted with filters for kids. This reactive approach leaves gaps that can expose children to unsafe or age-inappropriate content.
For parents, the findings reinforce a growing trust gap between families and technology companies. Even when platforms promise youth-friendly experiences, independent evaluations reveal weaknesses that raise new concerns.
For the industry, the message is sharper: child safety cannot be an afterthought. Companies will need to consider whether their business models — built on scaling AI for broad audiences — are compatible with the unique developmental needs of kids and teens.
Looking ahead, this type of assessment could also influence regulators. Just as laws protect children in areas like online privacy, policymakers may begin pressing for child-specific AI safety standards. The Gemini case shows why voluntary safeguards may not be enough.
As AI adoption accelerates, this assessment is less about one product and more about the urgent need for age-appropriate standards, independent testing, and transparent safeguards to protect the most vulnerable users.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.