Britain’s communications regulator has secured a set of enforceable promises from X, requiring the platform to review UK reports of illegal hate and terrorist content within 24 hours and to provide quarterly compliance data, marking a decisive step toward stricter online safety enforcement.
What happened
Britain’s media regulator, Ofcom, announced that it has secured a formal set of commitments from X (formerly Twitter) to tighten its handling of illegal hate-speech and terrorist material posted by UK users. Under the agreement, X must:
- Review reports of suspected illegal terrorist or hate content within an average of 24 hours.
- Ensure at least 85% of such reports are resolved within 48 hours via a dedicated UK reporting channel.
- Provide quarterly performance data for a twelve‑month period so Ofcom can verify compliance.
- Block access in the UK to accounts operated by or on behalf of proscribed terrorist organisations when those accounts are reported for illegal content.
- Engage external experts to audit how its reporting systems function, addressing long‑standing concerns that users could not tell whether a report was even received.

The commitments were secured after Ofcom’s Online Safety team conducted intensive engagement with X and gathered evidence from civil-society groups such as Tech Against Terrorism, Tell MAMA, and the Antisemitism Policy Trust. Those groups had repeatedly complained that illegal content remained visible on the platform despite its existing obligations under UK law.
Legal basis
The regulator is acting under the Online Safety Act, which gives Ofcom statutory powers to enforce legal duties on user-to-user services accessible in the UK. The Act requires platforms to:
- Prevent the spread of illegal content – including material that incites hatred on the grounds of race, religion, or sexual orientation, or that glorifies terrorism.
- Provide transparent, effective reporting mechanisms for users to flag such content.
- Publish regular transparency reports demonstrating compliance with time‑frames and removal rates.
Failure to meet these duties can trigger enforcement action, ranging from fines of up to £18 million (or 10% of qualifying worldwide revenue, whichever is greater) to formal notices requiring the platform to remove specific items or fix systemic failings. The commitments extracted from X are therefore not mere goodwill; they are enforceable obligations that, if breached, could lead to substantial penalties.
Impact on users and the company
For UK users
- Faster protection – The 24‑hour average review window means that reports of hate or terror content will be acted upon far more quickly than under the previous, opaque system. Victims of hate speech can expect faster removal of harmful posts and a clearer avenue for recourse.
- Greater transparency – Quarterly data submissions will be published in Ofcom’s public reports, giving users and advocacy groups a concrete view of how X is performing against its promises.
- Reduced exposure to extremist propaganda – By blocking accounts linked to proscribed terrorist groups, X will limit the reach of extremist narratives within the UK.
For X
- Operational costs – Building a dedicated UK reporting channel, hiring additional moderation staff, and integrating external expert audits will increase operating expenses. The company will also need to develop internal dashboards to generate the required quarterly metrics.
- Regulatory risk – Non-compliance now carries a clear monetary threat. Ofcom’s enforcement powers under the Online Safety Act mean that missed deadlines or insufficient removal rates could trigger fines that dwarf the cost of the new moderation infrastructure.
- Reputational stakes – Demonstrating compliance can help X restore trust with UK advertisers and users who have grown wary after previous allegations of lax content policing.
What changes are coming
- Dedicated UK reporting portal – X will launch a new, UK‑specific interface where users can submit reports of illegal hate or terrorist content. The portal will generate a receipt number, giving reporters proof that their complaint entered the system.
- External audit framework – Independent experts, likely drawn from NGOs and academic research groups, will be granted limited access to X’s moderation tools to assess whether reports are being triaged correctly and whether any systemic biases exist.
- Quarterly transparency dashboards – Every three months X must submit a data package to Ofcom detailing (see the illustrative sketch after this list):
  - Number of reports received, categorised by content type (hate, terrorist, other).
  - Average handling time and the proportion resolved within 24 hours and 48 hours.
  - Number of accounts blocked for terrorist affiliation.
- Enhanced account-blocking mechanisms – When an account is identified as belonging to a proscribed terrorist organisation, X will automatically restrict its UK access, while preserving the ability for law enforcement to request broader takedowns if required.
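To make those figures concrete, the sketch below shows one way metrics such as the 24-hour average and the share of reports resolved within 48 hours could be derived from raw report records. The record format, field names, and function are purely illustrative assumptions for this article, not X’s or Ofcom’s actual reporting schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, List

@dataclass
class Report:
    """A single user report of suspected illegal content (hypothetical record format)."""
    category: str            # e.g. "hate", "terrorist", "other"
    submitted_at: datetime   # when the report entered the UK reporting channel
    resolved_at: datetime    # when the report was reviewed and actioned

def quarterly_metrics(reports: List[Report]) -> Dict[str, float]:
    """Summarise handling times the way the commitments describe them:
    average review time plus the share resolved within 24 and 48 hours."""
    if not reports:
        return {"reports_received": 0}
    handling = [r.resolved_at - r.submitted_at for r in reports]
    total = len(reports)
    return {
        "reports_received": total,
        "average_handling_hours": sum(h.total_seconds() for h in handling) / 3600 / total,
        "share_within_24h": sum(h <= timedelta(hours=24) for h in handling) / total,
        "share_within_48h": sum(h <= timedelta(hours=48) for h in handling) / total,
    }
```

Under the commitments, the quarterly package would need to show an average at or below 24 hours and a 48-hour share of at least 0.85 for the relevant content categories.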
Broader context
The Ofcom action follows a wave of enforcement activity across Europe. In the EU, the Digital Services Act now places comparable duties on very large platforms to tackle illegal content, backed by fines of up to 6% of global turnover. The GDPR, often cited in the same breath, governs personal data rather than content moderation, but its penalty regime (up to €20 million or 4% of global turnover, whichever is higher) established the turnover-based fining model that online-safety regulators are now applying.
In the United States, the California Consumer Privacy Act (CCPA) does not directly address hate-speech moderation, but its emphasis on transparency and user control over data has encouraged tech firms to adopt more open reporting practices globally. By aligning its policies with the UK’s stricter online-safety regime, X may find it easier to meet parallel expectations in other jurisdictions.
What this means for digital rights advocates
The commitments represent a tangible win for groups campaigning for safer online spaces, but they also raise questions about oversight and accountability. Advocates will be watching:
- Whether the quarterly data truly reflects on-the-ground realities, or whether X can game the metrics.
- How independent the external experts will be, and whether they have sufficient authority to demand corrective action.
- The balance between removing illegal content and preserving legitimate expression, especially in politically sensitive contexts.
The next twelve months will be a critical test. If X meets or exceeds the 24-hour target, it could set a benchmark for other large platforms operating in the UK. Failure, however, would likely trigger fines and could embolden regulators to impose even stricter duties, potentially reshaping the global moderation landscape.
For ongoing updates on Ofcom’s investigations and X’s compliance data, follow the regulator’s official releases and the transparency reports that will be published on the Ofcom website.
