UK Parliament Calls for Social‑Media Regulation Comparable to Unsafe Toys
#Regulation

Regulation Reporter
3 min read

The Science, Innovation and Technology Committee urges ministers to treat social‑media platforms as high‑risk products, demanding enforceable age verification, algorithmic safeguards, and expanded coverage under the Online Safety Act, with draft legislation expected in the next parliamentary session.

Regulatory action

The Science, Innovation and Technology Committee has written to the Secretary of State for Science, Innovation and Technology, Liz Kendall, and the Minister for Online Safety, Kanishka Narayan, urging the government to amend the UK Online Safety Act 2023 so that social‑media services are regulated in the same way as products deemed unsafe for children (e.g., certain toys, chemicals, or e‑cigarettes). The committee cites a growing body of evidence linking platform use to mental‑health harms, self‑harm, and exposure to illegal content among under‑16s.

What it requires

| Requirement | Detail | Compliance implication |
| --- | --- | --- |
| Risk‑based product classification | Social‑media platforms must be classified as “high‑risk digital products” under the Online Safety Act, triggering mandatory safety‑by‑design obligations similar to those for unsafe toys. | Companies will need to conduct a Product Safety Assessment (PSA) for each service, documenting how design choices (e.g., infinite scroll, push notifications) could harm children. |
| Effective, privacy‑preserving age verification | Age checks must use government‑approved verification methods (e.g., digital identity, biometric token) rather than self‑declaration or easily defeated visual checks. | Platforms must integrate an approved verification API by the deadline, retain only the minimal data required to prove age, and delete it after verification. |
| Algorithmic transparency and mitigation | Providers must publish an Algorithmic Impact Statement (AIS) covering recommendation engines, ranking logic, and any addictive design features, and outlining steps to curb harmful content amplification. | Failure to disclose or to implement mitigations will breach the Act’s duty of care, attracting fines of up to £18 million or 10 % of global turnover, whichever is greater. |
| Mandatory illegal‑content filtering | Real‑time detection and removal of illegal material (e.g., child sexual abuse material, extremist propaganda) must be demonstrable through independent audits. | Companies must retain audit logs for at least 12 months and submit quarterly compliance reports to Ofcom, the Act’s regulator. |
| Extension to AI‑driven chatbots | The Act’s scope will be broadened to include AI chatbots that operate on closed data sets, subjecting them to the same safety standards. | Developers of large‑language models hosted on social platforms must implement content‑safety layers and undergo the same PSA process as the platform itself. |
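The data‑minimisation duty in the age‑verification row above can be sketched in code. This is a minimal illustration, not a real integration: the provider response shape, the field names, and the `AgeProof` record are all assumptions, and a production system would call an approved verification API rather than handle a raw response directly.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AgeProof:
    """Minimal record kept after verification: no name, DOB, or document data."""
    user_id: str
    over_16: bool
    verified_at: str
    provider: str


def verify_and_minimise(user_id: str, raw_response: dict) -> AgeProof:
    """Extract only the age attestation from a (hypothetical) provider
    response, then clear the response so no identity data is retained."""
    proof = AgeProof(
        user_id=user_id,
        over_16=bool(raw_response["over_16"]),
        verified_at=datetime.now(timezone.utc).isoformat(),
        provider=raw_response.get("provider", "unknown"),
    )
    raw_response.clear()  # discard DOB, document scans, etc. held in memory
    return proof


# Example: a mocked provider response containing more data than we may keep
response = {"over_16": True, "dob": "2007-04-01", "provider": "example-idp"}
proof = verify_and_minimise("user-123", response)
print(proof.over_16, response)  # prints: True {}
```

The point of the sketch is the last line: after verification, only the boolean attestation and an audit timestamp survive, which is the “minimal data required for age proof” the committee describes.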

Compliance timeline

| Milestone | Date | Action required |
| --- | --- | --- |
| Draft amendment publication | 1 September 2026 (expected) | Review the proposed statutory language and begin internal impact assessments. |
| Consultation closure | 31 October 2026 | Submit feedback on practical implementation of age‑verification APIs and AIS templates. |
| Statutory instrument enactment | 1 March 2027 | Legal teams must update terms of service, privacy policies, and user‑onboarding flows to reflect the new obligations. |
| First‑stage compliance deadline | 30 June 2027 | Deploy an approved age‑verification solution and publish the initial AIS for each platform. |
| Full compliance deadline | 31 December 2027 | Complete PSAs for all services, implement mandatory content‑filtering pipelines, and submit the first annual audit to the regulator. |
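A compliance team tracking these milestones might keep them in a simple script. The sketch below is illustrative only; the dates are the expected ones from the table above and may shift as the amendment progresses.

```python
from datetime import date

# Milestones copied from the compliance timeline; dates are expected, not final.
MILESTONES = [
    (date(2026, 9, 1), "Draft amendment publication"),
    (date(2026, 10, 31), "Consultation closure"),
    (date(2027, 3, 1), "Statutory instrument enactment"),
    (date(2027, 6, 30), "First-stage compliance deadline"),
    (date(2027, 12, 31), "Full compliance deadline"),
]


def upcoming(today: date) -> list[str]:
    """Return the milestones that have not yet passed, soonest first."""
    return [name for when, name in sorted(MILESTONES) if when >= today]


print(upcoming(date(2027, 4, 1)))
# prints: ['First-stage compliance deadline', 'Full compliance deadline']
```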

What organisations should do now

  1. Map current product risk – Classify each social‑media offering against the new “high‑risk” definition.
  2. Engage with the UK‑approved age‑verification framework – Begin technical integration tests with the GovTech Identity API, following the official guidance.
  3. Start drafting an Algorithmic Impact Statement – Use the template released by the Information Commissioner’s Office (ICO).
  4. Audit existing content‑moderation tooling – Identify gaps in illegal‑content detection and plan for third‑party audit contracts.
  5. Prepare for AI‑chatbot coverage – Review any in‑house language models for compliance with the forthcoming AI safety addendum.
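Step 1, mapping product risk, amounts to checking each service against the high‑risk criteria. A toy sketch follows, assuming a hypothetical feature checklist and an illustrative two‑feature threshold; the real definition of “high‑risk” will come from the statutory language, not from this list.

```python
# Hypothetical criteria drawn from the design features the committee cites
# (infinite scroll, push notifications); the threshold below is illustrative.
HIGH_RISK_FEATURES = {
    "infinite_scroll",
    "push_notifications",
    "algorithmic_feed",
    "autoplay",
}


def classify_service(name: str, features: set[str]) -> str:
    """Flag a service as 'high-risk' if it uses two or more of the listed
    design features; otherwise mark it for manual review."""
    hits = features & HIGH_RISK_FEATURES
    return "high-risk" if len(hits) >= 2 else "review"


print(classify_service("short-video app", {"infinite_scroll", "autoplay"}))
# prints: high-risk
print(classify_service("forum", {"push_notifications"}))
# prints: review
```

Even a crude checklist like this gives legal and engineering teams a shared starting point for the Product Safety Assessment that the classification would trigger.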

By treating social‑media platforms as products that can cause physical or psychological injury, the UK government aims to shift responsibility from voluntary goodwill to enforceable duty of care. Companies that adapt early will avoid the steep fines and reputational damage that could follow a breach of the amended Online Safety Act.
