The UK government's consultation on banning under-16s from social media has sparked warnings from digital rights groups that the policy would effectively build a mass age-verification system for the entire internet, creating serious privacy risks for all users.
The Open Rights Group (ORG) says any workable ban would force platforms to check the age of every user, adult and child alike, creating what it calls "serious risks to privacy, data protection, and freedom of expression."
The government has opened a public consultation on ways to "drive action to improve children's relationship with mobile phones and social media," a broad initiative that extends far beyond a simple age limit. Ministers are asking whether to restrict addictive platform features like infinite scroll, raise the digital age of consent, tighten enforcement of school phone bans, and block under-16s from major social platforms.
This consultation follows weeks of increasingly vocal calls in Westminster to address children's screen time with measures more dramatic than additional guidance notes. On Monday, a group of 61 Labour backbench MPs published an open letter supporting a ban similar to Australia's, and Prime Minister Keir Starmer has signaled that "no option is off the table" when it comes to online safety.
The Privacy Problem with Age Verification at Scale
The Open Rights Group warns that a ban would require platforms to verify age at scale, with all the privacy and security downsides that entails. Age gating at this level would drag millions of adults and older teens into proving their identity to private corporations simply to post, message, or read online, multiplying the data collection risks that already plague Big Tech.
"We already know these systems are risky," said James Baker, Platform Power and Free Expression programme manager at Open Rights Group. He pointed to last year's breach of sensitive age-verification data collected by Discord as a cautionary tale of how personal information can be exposed, misused, or repurposed.
Age-assurance technology remains lightly regulated in the UK, despite repeated warnings from rights groups. These systems often rely on identity documents, facial analysis, or inferred profiling that can have long-term consequences for privacy and security once collected. Even strong data protection laws offer little solace when the premise of a system is to gather more personal data, not less.
Beyond Social Media: A Comprehensive Digital Identity System
One of the Lords' amendments to the Children's Wellbeing and Schools Bill would push the idea further than some ministers might have intended, potentially banning under-16s from social functions in online games, messaging services like WhatsApp, and even collaborative platforms like Wikipedia.
"This goes far beyond Australia's experiment in banning under-16s from social media," ORG warned. The proposed measures could effectively require age verification for accessing a wide range of online services, creating a de facto digital identity system for all internet users.
The government insists it's not rushing to block kids from social media outright. Instead, the consultation sweeps up everything from endless scroll and school phone rules to a possible rethink of digital consent, with ministers repeatedly pointing back to the Online Safety Act.
The Technical Reality of Age Verification
For digital rights campaigners, the government's approach misses the fundamental point. They argue that the problem isn't young people existing online, but platforms designed to keep them hooked through addictive design patterns. Banning under-16s risks hard-wiring surveillance into everyday internet use.
The technical implementation of such a ban would likely require the following (a sketch of the resulting data footprint follows the list):
Identity Document Verification: Platforms would need to collect government-issued IDs, creating centralized databases of sensitive personal information.
Biometric Analysis: Facial recognition or liveness detection to prevent ID sharing, introducing biometric data collection at scale.
Continuous Monitoring: Systems to detect when verified users share credentials or when new accounts are created.
Cross-Platform Data Sharing: To prevent users from simply moving to alternative platforms, verification data might need to be shared across services.
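To make that footprint concrete, here is a minimal, entirely hypothetical Python sketch of the record a platform might end up retaining under ID-based verification. Every field name is an assumption invented for illustration, not drawn from any real platform or from the consultation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical illustration: the dossier a platform might be forced to
# retain under mandatory ID-based age verification. Field names are
# invented for this sketch.
@dataclass
class AgeVerificationRecord:
    user_id: str          # ties the dossier to an account
    document_type: str    # e.g. "passport" or "driving_licence"
    document_number: str  # government-issued identifier
    date_of_birth: str    # ISO date, the one fact the check needs
    face_scan_hash: str   # residue of a biometric liveness check
    verified_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def is_over_16(self) -> bool:
        """A yes/no answer, backed by everything stored above."""
        dob = datetime.fromisoformat(self.date_of_birth)
        dob = dob.replace(tzinfo=timezone.utc)
        return (self.verified_at - dob).days / 365.25 >= 16
```

The disproportion is the point: the platform only ever needs a boolean, but producing it this way leaves behind exactly the kind of sensitive dossier a breach would expose.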
Each of these steps introduces new attack vectors for data breaches and expands the digital footprint of every internet user. The Discord breach mentioned by ORG demonstrated how even platforms with legitimate age-verification needs can fail to protect sensitive data.
The Australian Precedent and Its Limitations
Australia's ban on under-16s from social media, which the UK is considering emulating, has faced its own challenges. The Australian implementation requires platforms to take "reasonable steps" to verify age, but the technical standards remain vague. Critics note that determined teens can bypass these measures by using VPNs, borrowing credentials, or simply lying about their age.
More importantly, the Australian approach hasn't addressed the core design issues that make social media addictive. Features like infinite scroll, variable reward schedules, and algorithmic content curation remain unchanged, meaning the platforms remain optimized for engagement regardless of user age.
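For readers unfamiliar with the term, a "variable reward schedule" delivers payoffs unpredictably, the reinforcement pattern behavioural research associates with compulsive checking. A toy Python sketch of the idea, invented here purely for illustration:

```python
import random

# Toy model of a variable-ratio reward schedule: each refresh pays off
# ("engaging post") only sometimes, and the user cannot predict when.
def refresh_feed(hit_probability: float = 0.3) -> str:
    return "engaging post" if random.random() < hit_probability else "filler"

# Unpredictable payoffs reinforce the pull-to-refresh habit far more
# effectively than a fixed, predictable schedule would.
print([refresh_feed() for _ in range(10)])
```

Because an age ban leaves this loop intact for everyone over the threshold, critics argue it treats the symptom rather than the design.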
Alternative Approaches: Privacy-Preserving Solutions
Digital rights advocates suggest alternative approaches that don't require mass surveillance:
Device-Level Controls: Parental controls and operating system features that allow families to manage screen time without exposing identity data.
Platform Design Changes: Mandating changes to addictive features that benefit all users, not just minors.
Education and Media Literacy: Investing in digital literacy programs that teach critical thinking about online content.
Privacy-Preserving Age Verification: Exploring zero-knowledge proofs or other cryptographic methods that verify age without revealing identity (a simplified sketch follows the list).
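As a rough illustration of that last item, here is a minimal sketch, assuming the third-party `cryptography` package, of a signed "over-16" attestation. It is a simplified bearer-credential design, not a true zero-knowledge proof: a real scheme would also need unlinkability so the token itself cannot be used to track users across services. All names and the token format are invented for this example:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side (e.g. a trusted attester): checks a document ONCE, then
# signs a claim that carries no identity at all.
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

claim = json.dumps({"over16": True, "exp": "2026-06-30"}).encode()
signature = issuer_key.sign(claim)

# Platform side: verifies the issuer's signature and learns only that
# someone the issuer vouches for is over 16. No name, no document
# number, no date of birth ever reaches the platform.
def platform_accepts(claim: bytes, signature: bytes) -> bool:
    try:
        issuer_public.verify(signature, claim)
    except InvalidSignature:
        return False
    return bool(json.loads(claim).get("over16", False))

assert platform_accepts(claim, signature)
```

Schemes along these lines, such as anonymous credentials or BBS+ signatures, are what advocates mean when they argue that age assurance need not become identity collection.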
The Broader Context: Online Safety Act and Digital Rights
The consultation comes amid ongoing debates about the UK's Online Safety Act, which critics argue is more about censorship than safety. The Act gives regulators broad powers to remove content deemed harmful, raising concerns about freedom of expression.
Digital rights groups argue that the same infrastructure built for age verification could be repurposed for other forms of content control, creating a slippery slope toward comprehensive digital surveillance.
What Happens Next
The government's consultation is ongoing, with digital rights groups, tech companies, and child safety organizations all submitting evidence. The final policy decision will likely balance competing concerns about child safety, privacy, and practical implementation.
For now, the debate highlights a fundamental tension in internet governance: how to protect vulnerable users without creating surveillance systems that affect everyone. As the UK moves forward with its plans, the technical and privacy implications of age verification at scale will remain at the center of the discussion.
The outcome will likely set a precedent for how other countries approach similar challenges, making the UK's decision particularly significant for the future of online privacy and digital rights globally.
