Early AI Use Risks Children’s Development, Safety: UN

Photo by: Freepik

By Diego Valverde | Journalist & Industry Analyst - Wed, 04/01/2026 - 13:45

Experts warn that rapid AI adoption among children introduces risks to cognitive development, data privacy, and safety due to absent pediatric frameworks. This impacts education, edtech, and digital platforms, increasing pressure on regulators and companies to implement child-focused AI standards and strengthen data governance.


The United Nations has established its first Independent International Scientific Panel on AI, which aims to address the rapid integration of these technologies among the one-third of global internet users who are children.

António Guterres, Secretary General, United Nations, explains that AI is moving at the speed of light, necessitating immediate scientific oversight. Sonia Livingstone, Professor of Social Psychology, London School of Economics and Political Science, and member of the UN panel, explains that "children are often not anticipated as users of digital resources, and so their needs and rights are often not supported."

Experts warn that AI has been integrated into search engines, social media platforms, and educational tools without any specific framework for pediatric safety or cognitive development. Data suggests that children are already active participants in the generative AI landscape.

According to a report by the EU Kids Online network, 72% of children aged nine to 17 in the European Union are Generative AI users. Adoption rates show significant geographic variance: Austria reaches 94%, Italy reports 89%, and Serbia maintains 88%. Conversely, countries such as Ireland and Spain show lower engagement at 40% and 47%, respectively.

While AI-assisted learning offers potential benefits for students with disabilities, Livingstone notes that the absence of longitudinal research on mental health, sleep cycles, and critical thinking creates a high-stakes environment. Livingstone says that AI tools are "hoovering up" data from children without consent or consideration for their developmental status. 

The Parental Perception Gap and Usage Realities

A documented disconnect exists between parental awareness and actual adolescent behavior regarding AI. Research conducted by Pew Research Center and Common Sense Media reveals a significant communication vacuum. Monica Anderson, Managing Director, Pew Research Center, says that 64% of teenagers aged 13–17 report using chatbots, yet only 51% of their parents are aware of this usage. Furthermore, four out of 10 parents report never having discussed AI with their children.

This vacuum is particularly evident in the use of AI for emotional support. The Pew Research Center indicates that 12% of teenagers use AI for advice or companionship, and 16% engage in casual conversation with models. Demographic analysis shows significant racial disparities in these behaviors: 21% of Black teenagers utilize AI for emotional support, compared to 13% of Hispanic and 8% of White teenagers. 

Michael Robb, Head of Research, Common Sense Media, says that a significant minority of children use AI in social ways that may make parents uncomfortable. These children may describe AI as their "best friend" or primary confidant, which are recognized red flags for problematic usage.

Cognitive Foreclosure versus Cognitive Atrophy

For an adult with established domain expertise, offloading a task to AI results in "cognitive atrophy," a recoverable weakening of a pre-existing skill, says Psychology Today. For children, the risk is "cognitive foreclosure." When a child delegates a task they have not yet learned to perform, such as constructing an argument or evaluating a source, they bypass the formation of essential neural pathways.

A 2026 study by Judy Hanwen Shen and Alex Tamkin, Researchers, Cornell University, showed this effect with software developers. Developers who fully delegated tasks to AI produced working code but failed subsequent conceptual quizzes, performing 17% worse than those without AI assistance. In children, this effect is compounded: because they lack the domain knowledge to "audit" AI output, the substitution of AI for learning becomes permanent.

Homogenization of Reasoning and Identity

Large Language Models (LLMs) function on statistical probabilities derived from training data that is predominantly Western, educated, and mainstream. When children consistently process information through these models, they risk adopting the reasoning structure of the model as their own. This introduces a threat vector to the developing mind: homogenization.

The statistical biases of the model become the default framing for the student. LLMs homogenize language, perspective, and reasoning strategies, argues a study published in Cell Press. For a child who has not yet formed independent reasoning, this generic output presents a major identity problem. The model does not compete with the child’s reasoning; it replaces it.

The Proliferation of AI-Generated Content

The quality of content in the pediatric digital space has seen a measurable decline due to automated production. Reports from Kapwing published in November 2025 indicate that about 21% of the YouTube feed consists of low-quality, AI-generated content, often described as "slop." These videos are produced at industrial scales. 

Dana Suskind, Professor of Surgery and Pediatrics, University of Chicago, calls this "brain stunt." Because children’s brains build 1 million new neural connections every second, incorrect inputs wire the brain incorrectly, she explains. 

Technical errors in AI-generated educational content can create dangerous feedback loops. Carla Engelbrecht, AI Educator and Creator, says that these mixed signals delay a child's ability to learn cause and effect and pull executive function offline to process nonsense, among other cognitive delays.

Systemic and Criminal Risks

Beyond cognitive impacts, AI facilitates high-risk forms of exploitation. “That is probably one of the most shocking and visible forms of harm, and it takes the form of AI-created sexual abuse content and sharing nudification [using AI and deepfake technology to make a person appear nude] apps, new ways of using AI both to approach and to exploit children,” says Livingstone.  

The United Nations notes that organizations dealing with child exploitation report increases in the circulation and creation of such content, suggesting a rising number of victims. Furthermore, AI tools are trained on data harvested from children without adequate privacy protections, creating a permanent digital footprint for individuals who cannot legally consent. 

Livingstone calls for greater clarity and recognition of how AI challenges look from different parts of the world and different segments of the population.

The current trajectory suggests that without intervention, a generation may emerge with significant gaps in foundational thinking skills. To mitigate these risks, the United Nations suggests several actions, including:

  • Implementation of Pediatric AI Standards: Experts call for the involvement of educationalists and child specialists in the design phase of LLMs to ensure they support rather than substitute for developmental milestones.

  • Regulatory Content Labeling: Platforms such as YouTube face increasing pressure to implement content credentials for animated media to distinguish between human-curated educational content and automated "slop."

  • Parental Literacy and Engagement: There is a critical need for toolkits that help parents move from passive observation to active auditing of their children’s AI interactions.

  • Educational Reform: Schools must shift from evaluating output to evaluating the process of thinking. This ensures that neural pathways for critical analysis are established before delegation to AI is permitted.

The exposure of children to AI at an early age presents a multifaceted challenge involving cognitive development, physical safety, and data privacy. The United Nations notes that the transition from human-raised to AI-supported development requires a robust, multidisciplinary framework that prioritizes the long-term cognitive health of the pediatric population over the short-term efficiency of algorithmic tools.

