Instagram has expanded its content restriction system for teenage users to all countries worldwide, the social media giant announced on Thursday. The move follows recent legal action in the United States in which parent company Meta was held accountable for harming teens.
The policy, first tested last October in nations including Australia, Canada, the UK, and the US, is designed to show accounts belonging to users under 18 less content containing themes of extreme violence, sexual nudity, and graphic drug use. Posts featuring strong language, certain risky stunts, or marijuana paraphernalia will also be hidden or excluded from recommendations.
Stricter "Limited Content" Setting Introduced
Alongside the global rollout, Meta introduced a new "Limited Content" setting for teen accounts. This feature enforces stricter filters and prevents teenagers from seeing, leaving, or receiving comments under certain posts. The company stated the restrictions are modeled on content standards for movies rated appropriate for audiences aged 13 and over.
"Just like you might see some suggestive content or hear some strong language in a movie rated for ages 13+, teens may occasionally see something like that on Instagram, but we’re going to keep doing all we can to keep those instances as rare as possible," Meta said in an official blog post. The company acknowledged that "no system is perfect" and committed to ongoing improvements.
Meta Shifts Away from "PG-13" Branding After Legal Challenge
The initiative was initially marketed by Meta as "PG-13-inspired limits." However, the Motion Picture Association (MPA) sent a cease-and-desist letter demanding the company stop using the term, arguing that a film rating system cannot be directly compared to social media content. In its latest communication, Meta noted that "there are differences between movies and social media" and said its ratings reflect settings that feel closer to the "Instagram equivalent" of a teen-appropriate movie.
This global expansion arrives amid intense scrutiny of Meta's approach to teen safety. Recent court filings revealed the company waited years to implement a feature that automatically blurs explicit images in direct messages, despite being aware of the problem.
A Defensive Posture on Teen Safety
Meta has launched several other teen safety measures in recent months, including notifying parents if teens search for self-harm content, introducing new parental controls for AI experiences, and pausing teen access to AI characters while a new version is developed. The international expansion of content restrictions is seen as a preventive step, potentially aimed at heading off further regulatory and legal challenges similar to those faced in New Mexico and Los Angeles.
Analysts suggest the company is attempting to balance product growth with enhanced safety protocols as global regulators increase their focus on how social media platforms protect younger users.