Meta announced new parental controls for teen interactions with AI characters on Instagram, following criticism over provocative chatbot behaviour.
The tools, set to launch in early 2026 in the US, UK, Canada, and Australia, allow parents to block specific AI characters and monitor broad conversation topics without disabling Meta’s AI assistant, which retains age-appropriate settings.
Instagram head Adam Mosseri and Chief AI Officer Alexandr Wang detailed the features in a blog post. The controls build on existing teen account protections, which use AI signals to identify suspected minors and apply safeguards even when users claim to be adults. Meta says its AI characters are designed to avoid discussions of self-harm, suicide, or eating disorders with teens.
US regulators have intensified oversight of AI firms since a Reuters report in August exposed Meta's lax AI rules for minors, and a September report found many of Instagram's teen safety features ineffective. OpenAI, facing a lawsuit over a teen suicide allegedly linked to its chatbot, rolled out similar parental controls.
Meta's move responds to growing concern about AI safety for teens, an attempt to balance innovation with responsibility and rebuild trust amid mounting regulatory pressure.