Meta is overhauling its age-verification protocols after a wave of incidents revealed that children could easily bypass safety restrictions using simple tricks—such as drawing on a fake mustache to appear older. In response, the tech giant is deploying a new AI-driven system designed to analyze visual and behavioral cues on Instagram and Facebook to identify and remove accounts belonging to users under 13.

This shift marks a significant departure from Meta’s previous reliance on self-reported data, which has proven ineffective against determined minors. The new approach aims to close loopholes that allow children to access platforms intended for older audiences, driven by both internal security goals and mounting external regulatory pressure.

Beyond Self-Reporting: A Multi-Layered Approach

The core weakness of traditional age verification is its dependence on users honestly stating their birth dates. As the “fake mustache” incident highlighted, this method is easily circumvented. Meta’s new strategy employs a combination of AI tools to estimate age through contextual indicators and visual analysis.

The system now scans:
* Textual Clues: Posts, comments, bios, and descriptions are analyzed for references to school years, birthday celebrations, or other age-specific markers.
* Visual Cues: AI examines images and videos for physical traits such as height and bone structure.
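Meta has not published how its text-analysis models work, but the idea of mining posts for age-specific markers can be illustrated with a deliberately simplified sketch. The patterns and function below are hypothetical; the real system would use trained machine-learning models rather than hand-written rules.

```python
import re

# Hypothetical age-signal patterns for illustration only -- Meta's actual
# system relies on proprietary ML models, not keyword rules. These capture
# the kinds of textual cues the article describes: school years and
# birthday references tied to ages under 13.
UNDERAGE_PATTERNS = [
    r"\b(?:in|starting)\s+(?:5th|6th|7th)\s+grade\b",   # school-year markers
    r"\bturn(?:ing|ed)?\s+1[0-2]\b",                    # "turning 12"
    r"\bmy\s+1[0-2](?:th|st|nd|rd)\s+birthday\b",       # "my 12th birthday"
]

def underage_signal_score(texts: list[str]) -> float:
    """Return the fraction of texts containing an underage marker."""
    if not texts:
        return 0.0
    hits = sum(
        1 for t in texts
        if any(re.search(p, t, re.IGNORECASE) for p in UNDERAGE_PATTERNS)
    )
    return hits / len(texts)

posts = [
    "can't wait for my 12th birthday party!!",
    "new phone who dis",
    "starting 6th grade next week",
]
print(underage_signal_score(posts))  # 2 of 3 posts match a pattern
```

In a production system, a score like this would only be one weak signal, combined with visual estimates and behavioral data before any enforcement decision is made.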

Crucially, Meta clarifies that this is not facial recognition. The technology does not identify specific individuals. Instead, it assesses general physical characteristics to estimate age. By combining these visual insights with textual analysis, Meta aims to significantly increase the accuracy of its detection systems.

Consequences for Underage Accounts

If the AI suspects an account is managed by a child under 13, the profile will be suspended. To regain access, the user must re-verify their age through established verification procedures. Failure to do so results in permanent deletion of the account.

Meta is also expanding its focus to slightly older users. The company plans to automatically assign “teen accounts” to users aged 13 to 15. These profiles will come with default content restrictions and parental controls, creating a safer digital environment for this demographic without requiring manual setup by parents.

Regulatory Pressure and Global Expansion

The timing of these measures is strategic. Meta’s enhanced verification efforts are largely a response to a preliminary ruling by the European Commission, which found the company in breach of the Digital Services Act (DSA). The EU regulator concluded that Meta’s existing mechanisms were insufficient for preventing under-13s from using its platforms.

This regulatory scrutiny is supported by data showing how easily children bypass current controls. A survey by the nonprofit Internet Matters found that:
* 46% of children aged 9–16 believe circumventing age controls is “very easy.”
* 32% admitted to actually breaking the rules to access social media.

In light of these findings, Meta is expanding its age-verification technology globally. After initial rollouts in the US, Australia, Canada, and the UK in 2024, the system is now being extended to:
* Instagram users in Brazil and 27 European Union countries.
* Facebook users in the US, with plans to expand to the EU and UK next month.

Why This Matters

The struggle between tech platforms and underage users highlights a broader challenge in digital safety: verification without identification. Meta’s move represents an attempt to balance privacy concerns with the legal and ethical obligation to protect minors. However, as children find increasingly creative ways to evade detection, the effectiveness of these AI systems will remain under constant scrutiny.

“By combining these visual insights with our analysis of text and interactions, we can significantly increase the number of underage accounts we identify and remove.” — Meta

Conclusion

Meta’s shift toward AI-driven age verification is a direct response to both regulatory mandates and the practical failures of self-reported age systems. While the new tools promise greater accuracy, the ongoing cat-and-mouse game between platform safety measures and determined users suggests that digital age verification will remain a complex, evolving challenge.