Kids & Generative AI: New Insights on Use, Risks & Well‐Being

Image Source: ChatGPT-4o
Generative AI tools are rapidly becoming part of children’s everyday experiences, even before many adults have fully grasped their reach. According to a new report from the Alan Turing Institute, nearly one in four UK children aged 8 to 12 have already used tools such as ChatGPT, well before entering secondary school. The findings come from the WP1 report of the “Understanding the Impacts of Generative AI Use on Children” project, developed in partnership with UNICEF and the LEGO Foundation.
While usage rises with age, the scale is notable even among the youngest:
14% of 8-year-olds have used generative AI.
That number jumps to 26% by age 12.
58% of AI-using children named ChatGPT as their main tool.
Children are using AI for a range of reasons, with clear patterns emerging:
43% say they use it for creativity and digital play.
43% turn to AI for learning and answering questions.
40% use it for entertainment purposes.
12% of 12-year-olds rely on it for help with homework.
For some children, particularly those with additional learning needs, generative AI plays an even deeper role. These children are more likely to use it for social connection (37%), to seek advice (39%), and even for companionship (37%). In these cases, AI is not just a tool, but a support system—raising complex questions about emotional reliance and digital relationships.
Widespread Benefits—and Notable Gaps
The report documents clear benefits of generative AI use among children. Many kids describe it as “fun,” “creative,” and “helpful.” It encourages experimentation, supports learning in engaging ways, and often prompts discussion among peers. Encouragingly, 76% of parents expressed a generally positive view of their children using these tools.
But the findings also reveal stark disparities in access and experience:
Children in higher-income households (social grades A–C1) are significantly more likely to have tried generative AI—61% compared to 44% in lower-income homes.
A large divide also appears between school types: 52% of private school students reported using generative AI, while only 18% of state school students had done so.
These gaps suggest that without deliberate intervention, generative AI could reinforce existing educational inequalities. Children with more resources—not just financially, but in terms of digital access and adult guidance—are already pulling ahead in their comfort and fluency with AI tools.
Real Concerns from Kids, Parents & Teachers
While enthusiasm for generative AI is strong, concerns are just as widespread. The report highlights a growing unease among parents, teachers, and children themselves about how these tools are being used—and misused.
Top concerns from parents include:
82% worry about their children encountering inappropriate or harmful content.
77% are concerned about children receiving false or misleading information.
76% believe AI could negatively affect children’s critical thinking skills, making them overly reliant on instant answers.
Teachers share many of the same worries.
Over half of teachers report that students have submitted AI-generated work as their own, making it harder to assess genuine learning and undermining academic integrity.
Others describe a lack of preparedness and resources to guide students in using AI responsibly or productively.
Children themselves raised more nuanced concerns in qualitative workshops. These workshops introduced key topics—such as fairness, trust, and environmental impact—before inviting children to share their own views and reactions. From these guided discussions, several themes emerged:
Bias and Representation: Some children noticed that AI-generated images didn’t reflect people who looked like them. Children of color, in particular, shared feelings of frustration when tools failed to produce diverse or accurate representations.
Environmental Impact: After learning that generative AI uses significant computing power and energy, many children expressed concern about the technology’s environmental footprint. This was especially important to children who already cared about climate change or sustainability.
Trust and Misinformation: Children reported feeling uneasy when they realized AI could “make things up” or provide information that wasn’t true. They said it made it harder to know what was real and whom to trust—especially when AI responses looked authoritative but weren’t accurate.
These are not passive users; they are actively grappling with the implications of the tools they’re using.
Children Want Voice—and Rights—in AI Design
In workshops conducted with Children’s Parliament as part of the WP2 strand of research, children aged 9 to 11 made clear that they want more control over how AI is designed, used, and governed. Their priorities include:
Ensuring children’s rights are respected in AI development.
Designing tools that are inclusive and free from bias.
Involving children in decision-making processes around how AI is integrated into their lives.
One child summed up the broader sentiment with striking clarity: “AI will be in all our lives, so we need to know what it means and how it works before we grow up.”
Rather than shielding children from AI, the report suggests, the priority should be to equip them with the tools, context, and agency to navigate it.
Looking Ahead
The report calls for a coordinated approach to ensure that generative AI benefits all children—not just the most digitally privileged. Key recommendations include:
Embedding AI literacy in school curricula, so children understand how these systems work and what their limitations are.
Designing AI tools with children in mind, including safeguards around privacy, accuracy, and bias.
Involving children directly in policymaking and product development, recognizing their expertise in their own experiences.
Providing support for teachers, including resources, training, and guidance on how to integrate AI in ways that enhance learning.
Addressing digital inequality, to ensure all children can access and safely use AI tools—not just those in wealthier households or private schools.
These steps are not just about safety; they are about opportunity. Generative AI is already influencing how children learn, socialize, and express themselves. The question is whether those experiences will be empowering, or whether they will be uneven, exploitative, or exclusionary.
What This Means: Centering Children in AI’s Future
This report offers one of the most comprehensive looks to date at how children are engaging with generative AI. Its findings show that children are not just passive consumers of technology—they’re early adopters, critics, and creative users.
What makes this research so urgent is that it challenges the assumption that AI is something children will confront in the future. They’re already using it—and in many cases, shaping how it works for them.
The risks are real: misinformation, overreliance, bias, and unequal access all threaten to deepen existing divides or diminish learning. But the potential is just as significant. Generative AI can enhance creativity, broaden understanding, and even provide meaningful forms of support—especially for children with fewer other resources.
What’s needed now is a framework that respects children as both users and stakeholders in AI. That means embedding ethics, rights, and representation into the technologies themselves—and doing so with children, not just for them.
Generative AI is here. How it affects the next generation will depend on whether we build systems that include them from the start.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.