On Tuesday, Instagram announced its most significant effort yet to safeguard young users by introducing new “teen account” settings. These changes will automatically switch millions of teen accounts to private and restrict the types of content they can access on the platform.
This update comes nearly three years after the “Facebook Papers” revealed the risks Instagram poses to young users. The new restrictions are also aimed at encouraging teens to enable parental supervision through the app. The update affects all users under 18: 16- and 17-year-olds will still be able to revert to their preferred settings manually, while users aged 13 to 15 will need parental approval to make such changes.
The new “teen account” settings build on more than 30 well-being and parental oversight tools that Meta, Instagram’s parent company, has introduced in recent years. These include features like “take a break” reminders and restrictions on age-inappropriate content, such as posts related to eating disorders.
Despite these efforts, Meta has faced criticism for placing too much responsibility on parents and teens themselves for ensuring safety. For example, the parental supervision tools depend on teens notifying their parents about their use of the app.
Pressure on Meta to better protect teens intensified after a new whistleblower, former Facebook employee Arturo Bejar, testified at a November Senate subcommittee hearing. Bejar revealed that Meta’s top executives, including CEO Mark Zuckerberg, had ignored warnings for years about the harms its platforms pose to teens.
Recent court documents from lawsuits against the company also claim that Zuckerberg repeatedly blocked initiatives aimed at improving teen well-being, that Meta knowingly allowed children under 13 to maintain accounts, and that the platform has facilitated child predators.
In a Senate hearing in January, Zuckerberg apologized to families whose children had been harmed by social media.
Meta says its latest changes are designed to address parents’ main concerns—who their teens are interacting with online, the content they’re exposed to, and whether their time is being well spent.
The new “teen accounts” update will automatically set all accounts for users under 18, both new and existing, to private with the strictest messaging settings. Under the update, teen users will only be able to receive messages from people they’re already connected to. Additionally, Instagram will limit who can tag or mention teens in posts, restricting this to people the teens follow.
Teens will also be placed under Instagram’s most restrictive content control settings. This update will limit their exposure to “sensitive” content on their Explore page and in Reels, such as posts promoting cosmetic procedures. Instagram implemented similar measures on a smaller scale earlier this year.
Additionally, teen users will receive reminders to take breaks after spending one hour on the app each day. The app will also activate “sleep mode” by default, muting notifications and sending auto-replies to direct messages between 10 p.m. and 7 a.m.
Starting next week, Instagram will begin applying these changes to teen accounts in select countries, including the United States. The app will introduce new features to its parental supervision tool, allowing parents to view recent messages their teens have sent, set daily time limits, block access during specific times, and monitor the content topics their teens are interested in.
The updates are expected to be rolled out to all teen accounts in the United States, the United Kingdom, Canada, and Australia within the next 60 days, with plans to expand to other countries later this year and next.
However, the effectiveness of some of these changes may be limited by a key issue: Meta has no reliable way to verify whether the person monitoring a teen’s account is actually a parent or an older friend posing as one. Meta does not conduct formal parent verification; instead, it relies on indicators such as the adult user’s birthdate and the number of accounts they oversee to determine whether they should be allowed to supervise a teen’s account, according to a spokesperson.
The company has also faced criticism for not doing enough to prevent teens from circumventing safety restrictions by falsifying their age when creating new accounts.
To address this, Meta is implementing artificial intelligence technology designed to identify teen accounts that may have incorrectly listed an adult birthdate.
Meta asserts that these new features were developed with input from its Safety Advisory Council—comprising independent online safety experts and organizations—along with youth advisors and feedback from teens, parents, and government officials.