Governments worldwide are intensifying efforts to restrict children’s access to social media, fueled by growing concerns over platform accountability and the potential harms to young users. TikTok has become the latest major tech company to yield to regulatory pressure, announcing a new age-detection system for Europe designed to keep users under 13 off the platform. This move signals a broader trend towards stricter digital age limits globally.

The system, tested in the UK over the past year, uses a multi-layered approach, analyzing profile data, content, and user behavior to identify potentially underage accounts. Unlike outright bans, TikTok’s approach flags suspicious profiles for human review rather than terminating accounts automatically. The company has not publicly disclosed accuracy metrics, but the rollout comes amid mounting scrutiny of social media’s impact on children.
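To make the flag-for-review model concrete, here is a minimal sketch of how a multi-signal pipeline of this kind can work. The signal names, weights, and threshold below are illustrative assumptions, not TikTok's actual implementation; the one detail taken from the reporting is that suspicious accounts are routed to human review rather than terminated automatically.

```python
# Hypothetical multi-signal age-inference sketch (assumed weights/threshold).
from dataclasses import dataclass

@dataclass
class AccountSignals:
    profile_score: float   # 0-1: how strongly profile data suggests an under-13 user
    content_score: float   # 0-1: how strongly posted/viewed content suggests one
    behavior_score: float  # 0-1: how strongly usage patterns suggest one

REVIEW_THRESHOLD = 0.6  # assumed cutoff for routing an account to human review

def flag_for_review(signals: AccountSignals) -> bool:
    """Combine signals into one score; flag for review, never auto-terminate."""
    combined = (0.3 * signals.profile_score
                + 0.4 * signals.content_score
                + 0.3 * signals.behavior_score)
    return combined >= REVIEW_THRESHOLD

# One strong signal alone stays below the threshold; several together cross it.
print(flag_for_review(AccountSignals(0.2, 0.9, 0.3)))  # combined 0.51 -> False
print(flag_for_review(AccountSignals(0.7, 0.8, 0.6)))  # combined 0.71 -> True
```

The design choice this illustrates is that no single signal triggers action: the combined score only gates entry into a human-review queue, which is where the privacy-versus-accuracy trade-off discussed below arises.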

Australia led the charge last year by enacting the first national ban on social media for users under 16, including platforms like Instagram, YouTube, Snap, and TikTok. The European Parliament is advocating for mandatory age limits, while Denmark and Malaysia are considering similar bans for those under 16. This momentum reflects a growing recognition that current self-regulation by tech companies is insufficient.

“We are conducting an uncontrolled experiment where tech giants have unfettered access to children’s attention, with minimal oversight,” stated Christel Schaldemose, a Danish lawmaker, during a recent parliamentary session. The debate centers on whether governments should impose strict age-based rules or allow platforms to continue self-regulation.

Advocacy groups are also pushing for stronger measures. In Canada, calls are growing for a dedicated regulatory body to address online harms affecting young people, particularly in light of AI-generated deepfakes circulating on platforms like X. Even ChatGPT is implementing age-prediction software to apply appropriate safeguards. In the US, 25 states have introduced age-verification legislation, indicating widespread concern at the state level.

However, legal experts caution against overreach. Eric Goldman, a law professor at Santa Clara University, argues that government-mandated censorship must be viewed with skepticism. “Unless something dramatically changes, regulators worldwide are building a legal infrastructure requiring age authentication for most websites and apps.” The question isn’t if age verification will become standard, but how.

TikTok’s approach—surveillance instead of outright bans—is sparking debate. Critics argue it amounts to mass user monitoring. “This is a fancy way of saying TikTok will surveil its users and infer their ages,” Goldman noted. The risk of false positives and privacy violations remains significant. The company acknowledges no universally accepted method exists to verify age without compromising user privacy.

Alice Marwick, director of research at Data & Society, agrees that surveillance-based systems inevitably expand data collection without demonstrably improving youth safety. The core problem isn’t the technology itself but whether large-scale age-gating is the right solution. Probabilistic guesses about age are prone to errors and biases, potentially disproportionately impacting marginalized groups.
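The error-proneness of probabilistic age guesses follows from base-rate arithmetic: when genuinely underage users are a small fraction of accounts, even an accurate classifier flags far more adults than children. The numbers below are assumptions chosen for illustration, not platform data.

```python
# Base-rate sketch: false positives from a probabilistic age classifier.
# All inputs are assumed values for illustration, not real platform figures.

def flagging_outcomes(total_users, under13_rate, sensitivity, specificity):
    """Return (false positives, precision) for a binary age classifier."""
    under13 = total_users * under13_rate
    older = total_users - under13
    true_pos = under13 * sensitivity          # underage accounts caught
    false_pos = older * (1 - specificity)     # older users wrongly flagged
    precision = true_pos / (true_pos + false_pos)
    return false_pos, precision

fp, precision = flagging_outcomes(
    total_users=100_000_000,  # assumed user base
    under13_rate=0.02,        # assume 2% of accounts are under 13
    sensitivity=0.90,         # classifier catches 90% of underage accounts
    specificity=0.95,         # 5% of older users are wrongly flagged
)
print(f"{fp:,.0f} older users wrongly flagged")  # about 4.9 million
print(f"precision: {precision:.1%}")             # roughly 27%
```

Under these assumptions, nearly three of every four flagged accounts belong to users who are not underage, which is the mechanism behind the disparate-impact concern: whoever the classifier systematically misreads bears those errors.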

The irony is not lost on legal scholars, who point out that forcing children to disclose sensitive data for verification increases their exposure to security risks. The EU often serves as a testing ground for global tech policy, and other countries are already taking note.

“The internet was once borderless, but we’re seeing a shift,” says Lloyd Richardson, director of technology at the Canadian Centre for Child Protection. “Australia’s approach—a complete delay until age 16—is the most effective.” While a full ban in Canada seems unlikely soon, the Online Harms Act, which proposed a digital safety oversight board, highlights the growing legislative momentum.

The debate extends beyond technology. Experts argue that age verification alone cannot resolve the underlying societal and policy challenges: the question is not just how to verify age but whether age-gating is the most effective way to protect children online. As it stands, verification adds friction and data collection without necessarily improving outcomes.

TikTok relies on third-party verification vendor Yoti, which has faced criticism for excessive data collection. While Yoti claims to delete images after age checks, concerns about privacy and data security persist.

Ultimately, the path forward remains uncertain. While TikTok’s new approach may align with EU regulations, its scalability and legal viability in the US are questionable, given First Amendment challenges. The global trend towards age verification is undeniable, but the optimal solution—whether through bans, surveillance, or a combination of both—remains a subject of intense debate.