TikTok to Strengthen Age-Verification Technology Across EU
Tech Beetle briefing GB

Essential brief

Key facts

TikTok is launching advanced age-verification technology across the EU to better identify users under 13.
The system combines automated analysis with human moderation to decide on account removals.
This move follows global calls for social media bans or restrictions for users under 16, inspired by Australia’s recent ban.
European regulators and governments are increasingly scrutinizing social media platforms’ age-verification practices.
TikTok’s initiative may influence global standards for protecting children on social media platforms.

TikTok is set to roll out enhanced age-verification technology across the European Union in the coming weeks, responding to growing demands for stricter controls on underage social media use. This move follows rising calls in multiple countries, including the UK, for measures similar to Australia’s recent ban on social media access for users under 16. The new system, piloted quietly over the past year within the EU, uses a combination of profile data, posted content, and behavioural signals to estimate whether an account belongs to a user under the age of 13. Rather than being removed automatically, flagged accounts will be reviewed by specialist moderators, who will decide whether removal is warranted, an approach intended to balance safety with fairness.

This development comes amid increased scrutiny of how social media platforms verify user ages under Europe’s data protection regulations. TikTok collaborated closely with Ireland’s Data Protection Commission, its lead EU privacy regulator, to ensure the system complies with regional rules. The UK has also been exploring stricter regulation, with Prime Minister Keir Starmer expressing openness to banning social media for under-16s over concerns about excessive screen time and potential harm to children. Starmer’s stance marks a shift from his previous opposition, which rested on enforcement challenges and fears that bans might drive young people toward less regulated spaces such as the dark web.

Australia’s social media ban for under-16s, implemented in December 2025, has already resulted in the removal of more than 4.7 million accounts across ten platforms, including TikTok, YouTube, Instagram, Snapchat, and Facebook. This precedent has intensified global conversations about protecting children online. Other countries are considering similar measures: Denmark aims to ban social media use for those under 15, while the European Parliament advocates for age limits to safeguard minors. These efforts reflect growing awareness of the risks social media poses to young users, including exposure to harmful content and mental health impacts.

TikTok’s approach differs from that of some competitors: Meta, for example, uses third-party verification services such as Yoti to confirm user ages on Facebook. Meanwhile, investigations have revealed inconsistencies in enforcement, such as moderators allowing under-13s to remain on TikTok if they claimed parental oversight. The new EU-specific system aims to close these gaps by pairing advanced detection technology with human moderation, a dual approach intended to improve accuracy in identifying underage users while respecting privacy and regulatory standards.

The broader implications of TikTok’s enhanced age-verification system highlight the complex balance between protecting children online and maintaining access to digital platforms. As governments and regulators push for stricter controls, social media companies face increasing pressure to innovate responsibly. The EU rollout may serve as a model for other regions considering similar regulations, potentially reshaping how age verification is handled globally. Meanwhile, parental rights advocates continue to call for more tools to manage children’s social media use, particularly in the wake of tragic cases linked to online challenges and harmful content.

In summary, TikTok’s strengthened age-verification technology reflects a significant step toward addressing the challenges of underage social media use. It aligns with a growing international trend toward stricter regulation and highlights the evolving role of technology and policy in safeguarding young users online.