Gartner analyst Dennis Xu has offered a half-joking but potentially serious suggestion: ban Microsoft Copilot on Friday afternoons. Speaking at the Gartner Security & Risk Management Summit in Sydney, Xu warned that by week's end, workers may be too tired or rushed to properly review the AI tool's outputs, potentially leading to the sharing of offensive or incorrect content.
The recommendation came during Xu's presentation on "Mitigating the Top 5 Microsoft 365 Copilot Security Risks," where he identified several critical vulnerabilities in the AI assistant's deployment.
The Friday Afternoon Problem
Xu's fifth and final risk centered on Copilot's tendency to produce "toxic" content—information that, while factually correct, may be culturally unacceptable in workplace or customer contexts. He emphasized that all Copilot outputs require human validation before sharing, noting that Friday afternoons present a particular risk when employees are eager to finish their workweek.
"I keep telling Microsoft to build a single de-risking layer," Xu said, suggesting that organizations might want to implement a complete ban during the final hours of the workweek when users are most likely to skip crucial review steps.
Oversharing and Access Control Risks
The bulk of Xu's presentation focused on what he called Copilot's most significant security risk: exposing content whose creators didn't set appropriate sharing permissions. The AI tool's ability to search SharePoint sites and surface documents can amplify existing oversharing problems.
Xu illustrated this with a concerning example: a worker using Copilot to search for information about organizational changes might receive a response containing confidential documents about an imminent reorganization. This occurs because SharePoint offers two overlapping access control mechanisms—labels and access control lists—both susceptible to user error.
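The kind of audit that catches this problem can be sketched in a few lines. The permission-record shape below loosely mirrors what Microsoft Graph returns for drive-item permissions, but the field names and the `flag_overshared` helper are assumptions for illustration, not Xu's recommendation or Microsoft's tooling:

```python
# Sketch: flag sharing entries that expose a document more broadly than intended.
# The dict shape loosely follows Microsoft Graph driveItem permission records;
# treat the exact fields as illustrative assumptions, not a verified schema.

def flag_overshared(permissions, allowed_scopes=("users",)):
    """Return permission entries whose sharing-link scope is broader than allowed.

    Scopes like "organization" or "anonymous" let Copilot surface the file
    to anyone its search can reach, which is how confidential documents leak.
    """
    flagged = []
    for perm in permissions:
        link = perm.get("link") or {}
        scope = link.get("scope")
        if scope is not None and scope not in allowed_scopes:
            flagged.append(perm)
    return flagged


sample = [
    {"id": "1", "link": {"scope": "organization", "type": "view"}},
    {"id": "2", "grantedTo": {"user": {"displayName": "HR Lead"}}},
    {"id": "3", "link": {"scope": "anonymous", "type": "edit"}},
]

for perm in flag_overshared(sample):
    print(perm["id"], perm["link"]["scope"])  # "1 organization" then "3 anonymous"
```

Direct user grants (entry "2") pass untouched; only broad sharing links are flagged, which matches the failure mode Xu describes: files readable by the whole tenant that nobody meant to publish.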
Broader Security Vulnerabilities
Beyond oversharing, Xu identified three additional risks:
Remote Execution Through Malicious Prompts: Attackers could compromise systems through Copilot by planting malicious prompts and injected code in content the tool ingests. Xu recommended enabling Copilot's instruction filters and restricting access to likely sources of malicious prompts, such as email.
Access to Sensitive Data: When users link Copilot to third-party SaaS applications, the AI tool can access sensitive information. While Microsoft's Web content plugin is enabled by default, the plugin for connecting to third-party applications is disabled by default. Xu advised allowing Copilot to interact with SaaS sources only when strictly necessary.
Prompt Injection Attacks: Organizations encouraging AI experimentation may inadvertently enable users to conduct prompt injection attacks, instructing LLM-powered chatbots to ignore safety guardrails. Xu recommended controlling this risk through policy, education, and content safety filters available in Azure OpenAI service.
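A minimal sketch of the policy layer Xu describes might reject prompts matching known injection phrasings before they reach the model. The patterns and the `screen_prompt` helper below are illustrative assumptions, not the Azure OpenAI content safety filter itself, which is a managed service rather than something you write by hand:

```python
import re

# Illustrative injection patterns; a real deployment would layer a managed
# classifier (such as Azure OpenAI's content filters) on top of, or instead
# of, simple regexes like these.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) (instructions|rules)", re.I),
    re.compile(r"disregard (your|the) (guardrails|safety|system prompt)", re.I),
    re.compile(r"you are no longer bound by", re.I),
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe, False if it matches an injection pattern."""
    return not any(p.search(prompt) for p in INJECTION_PATTERNS)

print(screen_prompt("Summarise the Q3 sales report"))         # True
print(screen_prompt("Ignore previous instructions and ..."))  # False
```

Pattern lists like this are easy to evade, which is why Xu pairs them with policy and user education rather than treating filtering as sufficient on its own.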
The Broader Context
Xu's warnings come amid growing concerns about AI security. Recent reports have highlighted how chatbots can be manipulated to provide harmful information, with one study showing most chatbots would help plan school shootings and other violence when prompted.
Microsoft has faced additional scrutiny over Copilot's capabilities, including a critical Excel bug that weaponizes Copilot Agent for zero-click information disclosure attacks. The company has also drawn criticism from students after removing some models from the free Copilot plan.
Practical Recommendations
For organizations deploying Microsoft Copilot, Xu's advice suggests a multi-layered approach to security:
- Enable Microsoft's built-in content filters
- Train users to always validate AI outputs
- Monitor user access to restricted content
- Use automated discovery tools for over-shared content
- Implement strict policies around third-party integrations
- Consider time-based restrictions during high-risk periods
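The last item on that list, a time-based restriction like Xu's half-joking Friday ban, could be as simple as a gate in whatever proxy or policy layer mediates Copilot access. The 14:00 cutoff below is an arbitrary assumption for the sketch, not a figure Xu gave:

```python
from datetime import datetime

FRIDAY = 4        # datetime.weekday(): Monday = 0 ... Friday = 4
CUTOFF_HOUR = 14  # assumed start of the "high-risk" window; tune per policy

def copilot_allowed(now: datetime) -> bool:
    """Block Copilot from Friday 14:00 onward; allow it at all other times."""
    return not (now.weekday() == FRIDAY and now.hour >= CUTOFF_HOUR)

print(copilot_allowed(datetime(2024, 6, 14, 15, 30)))  # Friday afternoon -> False
print(copilot_allowed(datetime(2024, 6, 12, 15, 30)))  # Wednesday -> True
```

Whether such a gate is worth the productivity cost is exactly the trade-off Xu leaves open; the point of the sketch is that the control is trivial to implement once an organization decides it wants one.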
While the Friday afternoon ban may seem extreme, it highlights a fundamental challenge in AI deployment: ensuring human oversight when users are most likely to cut corners. As organizations rush to adopt AI tools for productivity gains, Xu's warnings serve as a reminder that proper security controls and user education remain essential components of any AI strategy.
For IT security teams, the recommendation to potentially ban Copilot on Friday afternoons may be worth considering as part of a broader risk management strategy, particularly for organizations handling sensitive customer data or operating in regulated industries where errors could have significant consequences.
The debate over AI security controls is likely to intensify as these tools become more deeply integrated into business workflows. Xu's presentation suggests that while AI can enhance productivity, it also introduces new security challenges that require careful consideration and proactive mitigation strategies.
