UK Parliament committee finds AI-generated misinformation shaped real-world policing decisions, leading to senior officer resignations and a force-wide ban on Microsoft Copilot.
The UK Parliament has delivered the official postmortem on West Midlands Police's Copilot saga, and it reads like a case study in how not to mix generative AI with public-order decision-making. MPs on the Home Affairs Committee have laid out their findings on how West Midlands Police handled the November Aston Villa fixture that saw Maccabi Tel Aviv supporters barred. The force's decision leaned in part on Copilot-generated claims about disorder at a supposed West Ham match, a fixture that existed only in the chatbot's imagination but still found its way into briefing materials.
The report lays out how that duff information managed to travel further up the chain than it ever should have. MPs say claims about the fictional West Ham game ended up shaping how risk was viewed, underlining that the real problem was not just the hallucination itself but how easily it was taken at face value. The committee stops short of accusing former chief constable Craig Guildford of deliberately misleading Parliament, noting that he was not told before his evidence session on January 6 that AI had been used to generate the incorrect material. However, MPs say that by that point, the use of AI had already been disclosed internally, making it reasonable to expect that Guildford and assistant chief constable Matt O'Hara would have been properly briefed before appearing.
MPs say Guildford showed a remarkable lack of professional curiosity by failing to properly check the evidence before facing them, adding that getting the facts wrong twice points to wider due diligence failings rather than a one-off mistake. The report says it should not have taken two oral evidence sessions and a written correction to reach an accurate account, and warns that the episode raises serious questions about transparency and attention to detail within the force.
Guildford had told the committee that officers had not used AI to find the material, only to later correct the record in writing. Following criticism from Home Secretary Shabana Mahmood and others, Guildford retired at 52, and the acting chief constable moved to switch Copilot off across the organization while investigators worked out what had happened.
Looking ahead, MPs say the force needs to rebuild transparency and be far more careful about what it treats as intelligence. All of this lands at an awkward moment for policymakers. In a white paper published last month, the government set out plans to ramp up the use of AI across policing, including £115 million over the next three years for a new National Centre for AI in Policing known as Police.AI, initially focused on automating administrative work.

The committee's findings highlight a tension at the heart of modern policing: the pressure to adopt emerging technologies while maintaining the standards of evidence and accountability that public trust demands. The Aston Villa incident shows how easily AI-generated misinformation can infiltrate official decision-making, particularly when officers lack training in critically evaluating AI outputs.
It also raises a broader question for AI adoption in law enforcement: if a major police force can make high-stakes decisions based on fabricated chatbot output, what safeguards exist to prevent similar failures elsewhere? The committee's call for rebuilt transparency suggests that current protocols for AI use in policing may be insufficient.
For technology leaders and policymakers, the episode is a cautionary tale about deploying generative AI tools without robust verification processes and clear accountability frameworks. That senior officers were unaware of AI's role in generating key intelligence points to systemic communication failures that could have serious consequences in other contexts.
The timing is awkward given the government's push to expand AI use in policing. Automating administrative tasks may carry lower risks, but the Aston Villa case shows that even seemingly routine applications of AI can have far-reaching consequences once they influence operational decisions. The £115 million investment in Police.AI will need to address these trust and verification challenges if it is to avoid similar incidents.
As police forces across the UK weigh innovation against accountability, the West Midlands case is a stark reminder that the human element remains crucial in AI deployment. Technology can augment decision-making, but it cannot replace the professional judgment and due diligence the public expects from law enforcement.
