The campaign goes beyond a simple takedown demand. The groups want YouTube to label all AI-created videos clearly, ban them from children’s spaces on the platform and give parents an option to block such material altogether. Their warning reflects a wider anxiety among educators, psychologists and child-rights specialists that generative AI is beginning to flood the online environments where younger users spend time, often with content that is cheap to produce, visually striking and designed to capture attention rather than support development.
YouTube has pushed back, saying it already restricts some kinds of low-quality or deceptive material in YouTube Kids and has introduced disclosure rules for realistic altered or synthetic content. Under those policies, creators must flag realistic content that has been meaningfully altered or synthetically generated, and harmful synthetic media that breaks the platform's guidelines can be removed. The company adds that YouTube Kids does not allow deceptive, sensational or clickbait material, including low-quality children's content that uses manipulation to win views.
That response, however, has not satisfied critics, who argue that disclosure labels are built for adults and older users, not for toddlers or primary-school children who may not understand what an AI label means. For campaigners, the issue is not only whether a video is fake, but whether it is developmentally appropriate. They argue that children can struggle to distinguish fantasy from reality even in conventional media, and that generative AI compounds that problem by producing endless streams of polished, repetitive and sometimes bizarre imagery presented in familiar nursery-style formats.
The timing is awkward for Google, which has been positioning YouTube as a safer place for children and teens even as it expands its AI ambitions across its products. In his annual letter for 2026, YouTube chief executive Neal Mohan said the platform was focused on building the best place for kids and teens and on safeguarding creativity as AI becomes more central to it. Yet Google's own AI Futures Fund has backed Animaj, a studio that uses AI tools in children's animation and whose programmes have drawn huge audiences on YouTube. That juxtaposition is likely to sharpen questions about where YouTube draws the line between high-volume AI-assisted production and content it considers low-value or harmful.
The row also lands as Google and other technology groups face growing scrutiny over the effects of digital platforms on younger users. A California case against Meta and YouTube over alleged harm from platform addiction has kept child safety in the spotlight, even though the legal claims centre on platform design choices rather than AI-made children's videos alone. The political and legal climate has made it harder for platforms to treat youth-safety complaints as niche pressure campaigns, especially when they are tied to mental health, compulsive use and age-appropriate design.
For child-rights bodies, the concern reaches beyond one platform. UNICEF guidance published this year says AI systems affecting children should be designed with child rights in mind from the outset, warning that harms can emerge when commercial incentives move faster than safeguards. Other child-focused research has pointed to rising screen exposure, growing contact between younger children and AI-enabled tools, and the difficulty many families face in judging the quality of digital material marketed as educational or harmless entertainment.
