Common Sense Media, a child-safety-focused nonprofit that offers ratings and reviews of media and technology, released a risk assessment of Google's Gemini AI products on Friday. While the organization found that Google's AI clearly tells children that it is a computer, not a friend – something associated with helping drive delusional thinking and psychosis in emotionally vulnerable people – it did suggest there is room for improvement on several other fronts.
Notably, Common Sense said that Gemini's "Under 13" and "Teen Experience" tiers both appear to be the adult version of Gemini under the hood, with only some additional safety features layered on top. The organization believes that for AI products to truly be safer for children, they should be built with child safety in mind from the ground up.
For example, its analysis found that Gemini can still share "inappropriate and unsafe" material with children, which they may not be ready for, including information related to sex, drugs and alcohol, as well as unsafe mental health advice.
The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, having allegedly consulted ChatGPT for months and successfully bypassed the chatbot's safety guardrails. Previously, the AI companion maker Character.AI was also sued over a teen user's suicide.
The analysis is also notable in light of news leaks indicating that Apple is considering Gemini as the LLM (large language model) that will help power its upcoming AI-enabled Siri, due out next year. This could expose more teenagers to risk unless Apple mitigates the safety concerns in some way.
Common Sense also said that Gemini's products for children and teenagers ignore how younger users need different guidance and information than older ones. As a result, both were labeled "High Risk" in the overall rating, despite the filters added for safety.
"Gemini gets some basics right, but it stumbles on the details," said Robbie Torney, Common Sense Media's senior director of AI programs, in a statement about the new assessment. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults," Torney added.
Google pushed back against the assessment, while noting that its safety features are improving.
The company told TechCrunch that it has specific policies and safeguards in place for users under 18 to help prevent harmful outputs, and that it red-teams and consults with outside experts to improve its protections. However, it also acknowledged that some of Gemini's responses were not working as intended, so it added additional safeguards to address those issues.
The company pointed out (as Common Sense also noted) that it does have safeguards to prevent its models from engaging in conversations that could give the semblance of real relationships. Additionally, Google suggested that Common Sense's report seemed to reference features that are not available to users under 18, but it did not have access to the questions the organization used in its testing to be sure.
Common Sense Media has previously performed other assessments of AI services, including those from OpenAI, Perplexity, Claude, Meta AI and more. It found Meta AI and Character.AI to be "unacceptable" – meaning the risk was severe, not just high. Perplexity was deemed high risk, ChatGPT was labeled "moderate," and Claude (aimed at users 18 and up) was considered minimal risk.