Meta will launch a system on Instagram to alert parents when teenagers repeatedly search for suicide or self-harm content, with alerts triggering after multiple searches in a short period. Meta connects the feature to its Teen Account supervision tools and frames the move as a key step in protecting young users online.
Until now, Instagram has blocked harmful searches and directed teens to external support services; Meta is now adding direct parental notifications for extra oversight. Teen Accounts in the UK, US, Australia, and Canada will start receiving alerts next week, and the company plans to expand the system to more countries over the coming months.
Molly Rose Foundation Warns of Risks
The Molly Rose Foundation has criticized the alert system. Chief executive Andy Burrows says it could have unintended consequences. He warns automatic notifications may create panic instead of helping families.
The foundation was established by the family of Molly Russell, who died by suicide in 2017 at age 14 after viewing self-harm and suicide material online, including on Instagram. Burrows says parents want to know when their child is struggling, but warns that sudden alerts could leave families distressed and unprepared for sensitive conversations.
Meta says it will attach expert resources to every alert, with materials designed to guide parents through difficult discussions. Ian Russell, who chairs the foundation, questions whether the guidance will be effective: a parent receiving the alert at work could panic, and written guidance alone may not calm that immediate fear.
Experts Call for Preventive Measures
Charities say the alert system highlights deeper problems on the platform. Ged Flynn, chief executive of Papyrus Prevention of Young Suicide, welcomes the alerts but calls for stronger preventive action, noting that young people still encounter harmful content online.
Flynn notes that parents contact his charity daily, worried about children’s online activity. Families want platforms to prevent dangerous material from appearing in the first place, not just alert them afterward.
Leanda Barrington-Leach, executive director of 5Rights Foundation, urges Meta to redesign systems with child safety as the default. Burrows cites research from his foundation showing Instagram still recommends harmful content about depression, suicide, and self-harm to vulnerable teens.
He stresses companies must address systemic risks instead of shifting responsibility to parents. Meta disputes the foundation’s September report, claiming it misrepresents the company’s teen safety and parental support efforts.
Global Pressure on Social Media
Instagram designed Teen Account alerts to detect sudden changes in search behavior. Meta says the system builds on existing safety measures. The platform already hides self-harm and suicide content and blocks related searches.
Parents will receive alerts via email, text, WhatsApp, or directly in the app. Meta chooses the method based on contact details provided. The company acknowledges the system may occasionally trigger alerts without serious cause. It says it prefers caution when protecting young users.
Sameer Hinduja, co-director of the Cyberbullying Research Center, says alerts will naturally alarm parents and emphasizes that practical guidance must follow immediately. Companies cannot leave families alone with that fear, he argues, and Meta understands that responsibility.
Instagram also plans to extend alerts to interactions with its AI chatbot, noting that teens increasingly turn to artificial intelligence tools for support. Meanwhile, governments worldwide continue pressuring social media platforms to improve child safety.
Australia has banned social media for children under 16. Spain, France, and the UK are considering similar rules. Regulators closely monitor how tech companies engage young audiences. Meta chief executive Mark Zuckerberg and Instagram head Adam Mosseri recently appeared in a US court defending the company against claims it targeted underage users.