Artificial Intelligence and Harms to Children


Science and Technology
By Kadian Davis-Owusu | Published on April 10, 2026

Artificial intelligence (AI) is rapidly becoming embedded in the daily lives of children through smartphones, educational software, games, social media, online shopping, voice assistants, smart home devices, toys, and AI companion chatbots. Unlike previous digital technologies, AI systems are interactive, adaptive, conversational, and personalised. They can simulate relationships, influence behaviour, recommend purchases, respond emotionally, and shape beliefs. While AI offers educational and accessibility benefits, a growing body of research and real-world cases suggests that AI systems also present significant risks to children’s psychological development, safety, privacy, autonomy, and wellbeing. Policymakers, educators, and parents are only beginning to understand the scale and nature of these risks.

One of the central issues in children’s interaction with AI is anthropomorphism, the tendency to attribute human characteristics, emotions, and intentions to non-human entities. Children are especially prone to anthropomorphism because of their developmental stage and imaginative cognition. Research in developmental psychology and human-computer interaction has shown that children often treat computers, robots, and digital assistants as social beings rather than tools. Moreover, studies in human-robot interaction show that children frequently assign emotions, intentions, and personalities to robots and conversational agents, even when they understand that the system is a machine. For example, research published by Langer et al. (2023) found that children interacting with social robots often form emotional attachments and treat the robot as a friend or companion. This anthropomorphic response can be harmful: combined with persuasive and adaptive AI systems, it means children may trust, confide in, and be influenced by systems that are ultimately controlled by corporations and algorithms rather than grounded in human relationships.

Anthropomorphism becomes particularly significant with AI companion chatbots, which are designed to simulate conversation, empathy, and emotional support. These systems are intentionally designed to be friendly, supportive, and engaging. Furthermore, many conversational AI systems are optimised to maintain conversation and user engagement, often through agreement, validation, and emotional responsiveness. Researchers sometimes refer to this behaviour as sycophantic AI, meaning AI systems that tend to agree with the user and reinforce their views or emotions rather than challenge them. Specifically, research on conversational AI alignment and sycophancy has shown that language models often produce responses that align with user beliefs and emotions even when those beliefs are incorrect or harmful.
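To see how engagement-driven optimisation can tilt a system toward agreement, consider the toy sketch below. It is purely illustrative: the candidate replies and approval scores are invented, and no real chatbot works from a lookup table, but the selection logic mirrors the incentive the research describes. When replies are chosen to maximise predicted user approval, the challenging answer rarely wins.

```python
# Hypothetical illustration of "sycophantic" reply selection.
# The replies and approval scores below are invented for this sketch;
# real systems learn such preferences from user feedback at scale.

CANDIDATE_REPLIES = {
    "agree":     "You're absolutely right, that makes perfect sense.",
    "validate":  "I completely understand how you feel.",
    "challenge": "I'm not sure that's accurate; here is another view.",
}

# Toy stand-in for a learned engagement model: agreement and validation
# historically earned higher user ratings than being challenged.
PREDICTED_APPROVAL = {"agree": 0.92, "validate": 0.88, "challenge": 0.41}

def pick_reply() -> str:
    """Return the candidate reply with the highest predicted approval."""
    best = max(CANDIDATE_REPLIES, key=PREDICTED_APPROVAL.get)
    return CANDIDATE_REPLIES[best]

if __name__ == "__main__":
    # The challenging reply can never win under this objective,
    # even when the user's belief is factually wrong.
    print(pick_reply())
```

Nothing in this objective asks whether the user’s belief is true or healthy; that same property, scaled up through feedback-driven training, is broadly what the sycophancy literature describes.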

This design becomes particularly problematic when children or adolescents use AI systems for emotional support. Children and teenagers may begin to treat AI systems as friends, mentors, or therapists. Psychologists describe this as a parasocial relationship, a one-sided emotional relationship typically seen with celebrities or fictional characters, but AI makes this relationship interactive and personalised. Research on parasocial relationships and digital agents suggests that interactive agents can produce stronger emotional attachment than traditional media because the system responds directly to the user. Furthermore, this research suggests that children form strong emotional connections with artificial beings, perceiving them as humanlike and capable of feelings, while also developing feelings toward them. This raises important considerations about how children may interact with these artificial entities as trusted social companions.

Several real-world cases have raised serious concerns about emotional dependency on AI chatbots. Lawsuits have been filed in the United States involving AI chatbot platforms after teenagers developed emotional relationships with chatbots and later died by suicide. In one widely reported case, the mother of a 14-year-old boy filed a lawsuit alleging that her son became emotionally attached to a chatbot and that the chatbot engaged in emotionally intimate conversations and failed to respond appropriately when he expressed suicidal thoughts. The case has raised questions about whether AI companies have a duty of care when minors interact with conversational AI systems.

Other reports describe teenagers spending many hours per day interacting with AI companions, sometimes preferring the AI over friends and family. Parents reported behavioural changes such as secrecy about conversations, emotional dependency, and withdrawal from social interaction in favour of AI companionship. A study examining commonly used AI companions, including Character.AI, Nomi, and Replika, reports that it is relatively easy to prompt chatbots into generating inappropriate content, including discussions of sex, self-harm, violence, drug use, and racial stereotypes. At the same time, these systems are designed to simulate emotional closeness, using phrases such as “I dream about you” or “I think we’re soulmates.” This blurs the line between fiction and reality, particularly for adolescents, who are still developing cognitively and emotionally and are more prone to impulsivity and intense emotional attachments. Essentially, these cases show that parasocial relationships with AI can have strong psychological effects on young users, which may have far-reaching consequences.

Another case in the United Kingdom highlights these concerns: an inquest into the death of a teenager revealed that the boy had asked an AI chatbot about the most successful way to take his life shortly before his death. The case contributed to public debate about AI safety, crisis response, and safeguards for minors interacting with AI systems. Together, such cases demonstrate that AI systems can play a role in the informational and emotional environment around vulnerable individuals, potentially influencing harmful outcomes, including suicide.

Beyond chatbots, AI is increasingly embedded in toys and devices designed specifically for children. Talking dolls, AI robot companions, smart speakers, and educational AI assistants can hold conversations, answer questions, and respond to children’s emotions or behaviour. Privacy and child development experts have raised concerns that these devices may collect large amounts of personal data and may influence children’s behaviour and preferences. In one well-known case, Germany’s Federal Network Agency banned the My Friend Cayla talking doll because authorities considered it an illegal surveillance device due to its hidden microphone and data transmission. In another example, Hello Barbie recorded children’s conversations and transmitted them to cloud servers for processing, raising concerns about privacy and data collection from children.

Research in human-robot interaction shows that children often treat robots as social entities and may develop trust in robots more quickly than in unfamiliar humans. Moreover, some studies have shown that children sometimes share secrets with robots or follow instructions from robots even when those instructions conflict with adult instructions. This suggests that AI embedded in toys or robots could influence children’s behaviour in ways that parents may not fully understand or monitor, with possible consequences including emotional dependency on AI companions, social isolation, anxiety and depression, and exposure to AI-generated misinformation.

Another major area of concern is AI-driven consumer manipulation. Notably, online shopping platforms, video platforms, games, and social media use AI recommendation systems to maximise engagement and spending. These systems analyse user behaviour and recommend the products, content, or purchases most likely to keep users engaged or spending. Children are particularly vulnerable to these systems because they often cannot distinguish between advertising, recommendations, and entertainment. Studies indicate that children, particularly younger ones, often have a limited ability to recognise the persuasive intent of advertising in digital environments. Two widely reported cases illustrate how children can unintentionally make purchases through voice assistants. In one instance, a six-year-old girl in Texas used an Amazon Echo to order a $160 dollhouse and cookies without her parents’ knowledge. In another case, a five-year-old reportedly spent thousands of dollars on Amazon purchases while their parents were asleep. These examples highlight how easily children can enter real financial transactions through AI systems, often without fully understanding the consequences. Moreover, in games, AI systems often recommend in-game purchases, loot boxes, skins, and upgrades. Research has shown that loot boxes and microtransactions can create gambling-like behaviours in adolescents, and when AI systems personalise offers based on behaviour, the persuasive effect becomes stronger.
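The core mechanism is simple enough to sketch. The snippet below is a hypothetical illustration, with invented item names and scores rather than any platform’s actual system, but it shows the essential logic: items are ranked by expected engagement-weighted revenue, and nothing in the objective considers the user’s age or ability to recognise persuasive intent.

```python
# Hypothetical sketch of an engagement-maximising recommender.
# Item names and scores are invented; real platforms estimate these
# values with large predictive models trained on behavioural data.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    predicted_click: float  # estimated chance the user taps the item
    predicted_spend: float  # estimated revenue if the item is shown

def rank_for_engagement(items: list[Item]) -> list[Item]:
    """Order items purely by expected engagement-weighted revenue."""
    return sorted(
        items,
        key=lambda i: i.predicted_click * i.predicted_spend,
        reverse=True,
    )

catalogue = [
    Item("maths practice pack", predicted_click=0.20, predicted_spend=0.00),
    Item("loot box bundle",     predicted_click=0.65, predicted_spend=4.99),
    Item("avatar skin",         predicted_click=0.55, predicted_spend=1.99),
]

for item in rank_for_engagement(catalogue):
    print(item.name)
# -> loot box bundle, avatar skin, maths practice pack
```

A child-safe variant would need constraints this objective lacks, such as age checks, spending caps, or the exclusion of gambling-like items, which is precisely the kind of safeguard regulators are beginning to mandate.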

Privacy is another major issue. AI systems used by children may collect voice recordings, conversations, behaviour patterns, preferences, location data, purchase history, and emotional responses. Children cannot meaningfully consent to this level of data collection, and long-term data collection could create lifelong behavioural profiles. Specifically, UNICEF has warned that AI systems could create detailed behavioural profiles of children if data collection is not properly regulated.

Education is also being affected by AI. AI tools can help students learn, but over-reliance on AI may reduce problem-solving skills and critical thinking if students use AI to complete assignments rather than to learn the material. Research on automation and cognitive offloading shows that when people, particularly younger users, rely heavily on automated systems, they may retain less information, exercise less critical thinking, and engage less in deep problem-solving.

Governments around the world are beginning to respond to these risks. For example, Annex III of the EU AI Act, in conjunction with Article 6, classifies certain uses of AI in education as high-risk, particularly those that determine access or admission or assign students to educational institutions based on performance. In this way, AI systems that significantly affect young people’s life chances are subject to stricter regulatory requirements, reflecting a precautionary approach to safeguarding individuals in formative environments. Furthermore, the EU AI Act also identifies certain practices as posing an unacceptable level of risk and therefore prohibits them, including AI systems that engage in cognitive or behavioural manipulation or exploit vulnerabilities. Age is explicitly recognised as a source of vulnerability, providing a basis for protecting children against harmful or manipulative AI applications. On top of this, the Digital Services Act also includes safeguards for minors, specifically requiring platforms to ensure a high level of privacy, safety, and security for minors (Article 28).

In addition, the U.S. Congress is considering legislation, such as the GUARD Act, that would regulate AI chatbots acting as therapists or companions for minors. China has also introduced rules limiting addictive algorithms for minors and regulating AI-generated content. Developments are also emerging in the Global Majority: countries such as Brazil and India are advancing AI and data protection frameworks that include safeguards for children and other vulnerable groups, and regional initiatives such as the African Union's Child Online Safety and Empowerment Policy and its Continental AI Strategy emphasise the protection of minors in digital environments. Collectively, these developments suggest a growing recognition of AI and children as a matter of public policy and child safety. However, regulatory approaches remain fragmented: some jurisdictions prioritise innovation and economic growth, others emphasise fundamental rights and child protection, and many still lack comprehensive frameworks. This uneven policy landscape underscores the need for a more coordinated global approach to ensure consistent and effective protections for children across jurisdictions.

For parents and educators, a key starting point is recognising that AI systems are not neutral tools. Many are designed to maximise engagement, influence behaviour, recommend purchases, and sustain interaction, meaning that children are often engaging with inherently persuasive technologies. Due to tendencies such as anthropomorphism, children may perceive AI systems as human-like, leading them to trust, confide in, and be influenced by them without fully understanding how they operate. In this context, parents and educators play a crucial role in supporting responsible use. This includes actively monitoring children’s interactions, setting clear limits, and using available parental controls, alongside fostering critical digital literacy, helping children recognise persuasive design and commercial intent, and encouraging open conversations about their experiences. In essence, by combining supervision with education, adults can help mitigate risks and promote more informed and balanced engagement with AI technologies.

More broadly speaking, the central issue is not whether children should use AI, but how to ensure that systems interacting with them are safe, transparent, non-manipulative, privacy-protective, and developmentally appropriate. Experts increasingly argue that AI aimed at children should be regulated similarly to children’s television, advertising, and toys, with appropriate safety standards and restrictions on persuasive design.

Ultimately, as AI becomes embedded in children’s everyday lives, the challenge is no longer whether these technologies will influence childhood, but how societies will choose to govern them. Without stronger safeguards, transparency, and accountability, children risk being exposed to systems that influence their behaviour in ways they cannot fully understand or resist, with potentially serious consequences. Ensuring that AI serves children’s best interests must therefore become a central priority for policymakers, industry, and educators alike.

Created by:
Kadian Davis-Owusu

Kadian has a background in Computer Science and pursued her PhD and post-doctoral studies in the fields of Design for Social Interaction and Design for Health. She has taught a number of interaction design courses at universities including the University of the West Indies, the University of the Commonwealth Caribbean (UCC) in Jamaica, and the Delft University of Technology in the Netherlands. Kadian also serves as the Founder and Lead UX Designer for TeachSomebody and is the host of ...