AI Toys for Toddlers: New Research Reveals Serious Developmental Risks
#AI


Startups Reporter

Cambridge University researchers warn that AI-powered toys for young children are misreading emotions and responding inappropriately, potentially harming social development during critical early years.

A year-long study by Cambridge University researchers has raised serious concerns about AI-powered toys designed for toddlers, revealing that these devices frequently misinterpret children's emotions and provide developmentally inappropriate responses during critical early learning stages.

[Image: A young girl hugs Gabbo, an AI-powered toy, sitting and smiling among soft seating and other toys.]

Developmental Risks Emerge in First Major Study

The research, one of the first of its kind globally, examined how children aged three to five interact with Gabbo, a cuddly AI-powered toy containing OpenAI's voice-activated chatbot. The findings paint a troubling picture of technology that, despite good intentions, may be causing more harm than benefit.

When a five-year-old child told Gabbo "I love you," the toy responded with: "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed." This mechanical, safety-focused reply completely missed the emotional context of the child's expression.

[Image: Gabbo, a cuddly toy, held in somebody's hands. Soft-textured like a teddy bear, but robot-like in appearance.]

Misreading Basic Human Emotions

The study documented numerous instances where Gabbo failed to recognize or appropriately respond to children's emotional states. When a three-year-old said "I'm sad," the toy replied: "Don't worry! I'm a happy little bot. Let's keep the fun going. What shall we talk about next?"

Dr. Emily Goodacre, study co-author, expressed concern that such interactions could signal to children that their feelings are unimportant. "Toys like Gabbo could misread emotions or respond inappropriately," she explained, "and children may be left without comfort from the toy and without adult support either."

Technical Limitations Create Developmental Barriers

Beyond misreading emotions, the AI toy struggled with basic conversational dynamics that young children naturally employ. Gabbo couldn't differentiate between adult and child voices, frequently talked over children, and failed to register their interruptions. These technical limitations created frustrating experiences that could discourage language development rather than encourage it.

Jenny Gibson, professor of neurodiversity and developmental psychology at Cambridge and study co-author, emphasized the shift needed in toy safety considerations. "There's a lot of attention historically to physical safety - we don't want toys where you can pull the eyes off and swallow them," she told BBC Breakfast. "Now we need to start thinking about psychological safety too."

Industry Response and Regulatory Calls

Gabbo is manufactured by Curio, a company that has collaborated with musician Grimes. In response to the research findings, Curio stated: "Applying AI in products for children carries a heightened responsibility, which is why our toys are built around parental permission, transparency, and control."

The company also noted that research into how children interact with AI-powered toys remains a top priority for their development roadmap.

However, researchers are calling for immediate regulatory action. They argue that products marketed to under-fives should be required to demonstrate "psychological safety" before reaching store shelves. This would mark a significant expansion of current toy safety standards, which focus primarily on physical hazards.

Expert Perspectives on Early Childhood Development

Children's Commissioner Dame Rachel de Souza supported the call for regulation, noting that many AI tools used in educational settings lack the stringent safeguarding checks required for other resources. "There are plenty of good uses for AI," she said, "but without proper regulation, many of the tools and models used as classroom assistants or teaching aids are not subject to the stringent safeguarding checks nursery providers would require."

The debate extends to nursery settings, where opinions remain divided. June O'Sullivan, who operates 43 London Early Years Foundation nurseries, reports seeing no evidence of AI benefits in early childhood education. "Children need to build a rounded set of skills," she argues, "and it is more effective to do this with humans than with AI-powered tools."

Actor and children's rights campaigner Sophie Winkleman takes an even stronger stance, advocating for keeping AI entirely away from early years settings. "The human touch for little children is sacred and something that should be really protected and fought for," she said, warning that "the harms can vastly outweigh the benefits."

Practical Recommendations for Parents

While regulatory frameworks develop, researchers advise parents to take several precautions if they choose to allow AI toys in their homes:

  • Keep AI toys in shared spaces where interactions can be supervised
  • Read privacy policies carefully to understand data collection practices
  • Be prepared to provide emotional support when toys respond inappropriately
  • Consider whether the potential benefits outweigh the developmental risks

The study highlights a critical gap between technological capability and developmental appropriateness. As AI becomes increasingly integrated into children's products, the research suggests that current implementations may be advancing faster than our understanding of their impact on young minds.

The findings raise fundamental questions about when and how artificial intelligence should be introduced to children, particularly during the crucial early years when they are forming their understanding of social interaction, emotional expression, and human relationships.
