Swedish PM's AI 'Second Opinion' Sparks Ethical and Security Concerns
Sweden's Prime Minister Ulf Kristersson has ignited a firestorm in tech and political circles by disclosing his routine use of generative AI tools—including ChatGPT and France's LeChat—to seek "a second opinion" on governmental matters. In an interview with Dagens industri, Kristersson stated: "I use it myself quite often. If for nothing else than for a second opinion. What have others done? And should we think the complete opposite?"
The revelation prompted immediate backlash from AI ethicists and computer scientists. Simone Fischer-Hübner, a cybersecurity researcher at Karlstad University, warned in Aftonbladet: "You have to be very careful," highlighting the risks of feeding sensitive information into opaque AI systems. The newspaper, Sweden's largest, went further in a scathing editorial, accusing the PM of succumbing to "the oligarchs' AI psychosis."
Virginia Dignum, Professor of Responsible AI at Umeå University, delivered the most incisive critique to Dagens Nyheter: "The more he relies on AI for simple things, the bigger the risk of overconfidence in the system. It’s a slippery slope... AI isn’t capable of meaningful political opinions—it simply mirrors its creators' biases. We must demand reliability guarantees. We didn’t vote for ChatGPT."
Kristersson's spokesperson Tom Samuelsson later clarified that the PM keeps security-sensitive data out of AI queries, describing the usage as merely "a ballpark" reference. Yet this defense fails to address three core concerns:
1. Data sovereignty: Potential exposure of state deliberations to private AI vendors
2. Accountability: Lack of transparency in AI-generated "opinions" influencing policy
3. Competence erosion: Risk of delegating critical thinking to statistically driven systems
The controversy underscores a critical inflection point for governments worldwide: as AI permeates decision-making, rigorous frameworks are needed to distinguish informational augmentation from unaccountable delegation. Technical audiences will recognize that this isn't merely about ChatGPT—it's about defining guardrails before mission-critical systems inherit the biases and hallucinations of large language models. Sweden's debate may become a blueprint for nations navigating the thin line between AI assistance and the abdication of democratic responsibility.
Source: The Guardian, August 5, 2025