Instagram will begin notifying parents if their teenager repeatedly searches for terms related to suicide or self-harm within a short timeframe, the social media platform announced on Thursday. The alerts, launching in the coming weeks, are aimed at parents enrolled in the app's parental supervision tools. The move comes as Meta, Instagram's parent company, faces multiple lawsuits alleging its platforms have harmed young users.
Such searches are already blocked from returning results, but the feature is designed to flag the behaviour itself, since repeated attempts can signal that a teen is at risk. "These new alerts are designed to make sure parents are aware if their teen is repeatedly trying to search for this content so that they can support their teen," the company stated.
How the Alert System Will Work
Searches that could trigger a notification include phrases that encourage self-harm, indicate a teen might be at risk, or contain specific terms like "suicide". Parents will receive the alert via email, text, WhatsApp, or an in-app notification, depending on the contact details they have provided. The notification will include resources to help parents approach conversations with their child.
The alerts will first roll out in the U.S., U.K., Australia, and Canada next week, with plans to expand to other regions later this year. Instagram said it consulted its Suicide and Self-Harm Advisory Group to set a threshold requiring "a few searches within a short period of time", so that a one-off search does not trigger an alert; over-notifying parents could reduce the feature's effectiveness.
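To make that threshold logic concrete, the sketch below models one plausible implementation: a sliding-window counter that fires a single parent alert once enough flagged searches land inside the window. Instagram has not published its actual parameters, so the class name, the three-search threshold, and the 30-minute window here are all illustrative assumptions, not the company's real design.

```python
from collections import deque
from time import time

# Hypothetical values for illustration only: Instagram describes the real
# threshold only as "a few searches within a short period of time".
SEARCH_THRESHOLD = 3          # flagged searches needed to trigger an alert
WINDOW_SECONDS = 30 * 60      # sliding window of 30 minutes


class SelfHarmSearchMonitor:
    """Tracks flagged-search timestamps for one teen account and decides
    when a single parent alert should be sent."""

    def __init__(self) -> None:
        self._events: deque = deque()   # timestamps of flagged searches
        self._alert_sent = False        # suppress repeat notifications

    def record_flagged_search(self, now: float | None = None) -> bool:
        """Record one blocked search; return True exactly once, when the
        count inside the window first reaches the threshold."""
        now = time() if now is None else now
        self._events.append(now)
        # Evict searches that have aged out of the sliding window.
        while self._events and now - self._events[0] > WINDOW_SECONDS:
            self._events.popleft()
        if len(self._events) >= SEARCH_THRESHOLD and not self._alert_sent:
            self._alert_sent = True
            return True
        return False


# Two quick searches stay silent; the third within the window alerts once.
monitor = SelfHarmSearchMonitor()
print(monitor.record_flagged_search(now=0.0))     # False
print(monitor.record_flagged_search(now=60.0))    # False
print(monitor.record_flagged_search(now=120.0))   # True -> notify parent
print(monitor.record_flagged_search(now=180.0))   # False (already alerted)
```

Requiring more than one search before alerting, and suppressing repeat notifications afterwards, mirrors the stated goal of avoiding unnecessary alerts that could desensitize parents.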
Context of Ongoing Legal Scrutiny
The announcement coincides with significant legal pressure on Meta. In ongoing testimony in a U.S. District Court in California, Instagram head Adam Mosseri was questioned over the delayed rollout of basic safety features for teens. Separately, a lawsuit in Los Angeles County Superior Court revealed an internal Meta study found parental controls had "little impact on kids’ compulsive use of social media."
Instagram addressed the balance between caution and over-notification in a blog post: "We feel — and experts agree — that this is the right starting point, and we’ll continue to monitor and listen to feedback."
Future Developments and Broader Implications
Looking ahead, Instagram plans to extend these notifications to instances where a teen attempts to engage the platform's AI in conversations about suicide or self-harm. The development underscores the growing demand for tech companies to implement more robust protective measures for younger users amidst a global debate on social media's impact on mental health.