New research reveals that while AI tools eliminate time-consuming colleague interruptions, they simultaneously undermine the informal interactions that build psychological safety, belonging, and innovation. This piece offers concrete strategies for balancing efficiency with human connection.
The phrase is becoming alarmingly common in tech workflows: "Now I don’t have to bug [someone]." Product designers skip consulting researchers as RAG tools surface insights instantly. Product managers bypass designers for mockups, relying on AI-generated options. Engineers avoid accessibility teams, trusting real-time automated scanners. It’s framed as liberation—a removal of friction that lets individuals solve problems independently. But what if the very "bugs" being automated away—the quick Slack exchanges, the hallway clarifications, the spontaneous whiteboarding sessions—are actually the invisible scaffolding holding teams together?
This isn’t merely about lost convenience. When AI replaces those micro-interactions, it disrupts the organic energy that fuels team cohesion. Consider what vanishes: a two-minute Slack question that evolves into a twenty-minute collaborative problem-solving session; a "quick question" that surfaces a fundamental misunderstanding before it derails a project; an accessibility review that mentors a junior designer through nuanced guidance. These aren’t inefficiencies to be eradicated—they’re the building blocks of trust, belonging, and the shared understanding that transforms a group of individuals into a resilient team.
Decades of research confirm this intuition. In 2012, MIT’s Human Dynamics Lab (led by Alex Pentland) found that informal communication—hallway chats, coffee breaks, unscheduled desk visits—was the strongest predictor of team productivity, outperforming formal meetings by 35%. Teams rich in these interactions consistently delivered better outcomes. More recently, Google’s Project Aristotle, after studying more than 180 teams, identified psychological safety—the belief that one can take interpersonal risks without fear—as the number-one factor in high performance. This safety isn’t built in quarterly reviews; it’s cultivated through countless low-stakes moments: admitting confusion over coffee, joking about a failed experiment, asking for help without shame. AI-driven efficiency, by eliminating the need for these moments, risks starving teams of the very conditions that make them effective.
The consequences are now measurable. A 2025 study from Harvard, Columbia, and Yeshiva University tracked teams using AI coordination tools and found decreased overall performance, increased coordination failures, and eroded trust—particularly pronounced in low- and medium-skill teams during early adoption phases. When AI handles the "bugging," the human-to-human connections that establish psychological safety atrophy. As disconnection sets in, employees lose the sense of belonging that keeps them engaged. McKinsey’s Great Attrition research confirms this: not feeling connected to colleagues is a top reason people leave organizations. For a median-sized S&P 500 company, the cost of disengagement and attrition ranges from $228 million to $355 million annually in lost productivity—a staggering toll for what began as a well-intentioned efficiency gain.
The innovation impact is equally concerning. Korean researchers analyzing high-tech manufacturing firms in 2024 concluded that "weak ties"—occasional interactions with colleagues outside one’s immediate team—were critical for breakthrough innovation in fast-changing environments. These are precisely the connections AI threatens: the designer who casually asks an engineer about feasibility, the content strategist who bounces ideas off a UX researcher during lunch. When AI provides instant answers, these serendipitous collisions diminish, narrowing the pool of perspectives that fuel creative problem-solving. Teams may become more efficient at executing known tasks but less capable of adapting to novel challenges—a dangerous trade-off in volatile markets.
The solution isn’t rejecting AI but deploying it with intentionality. Leaders must distinguish between toxic friction (repetitive, draining tasks) and productive friction (the micro-interactions that build capability and connection). Here’s how:
First, target AI at true toil. Use it to eliminate soul-crushing repetition—formatting reports, basic code scaffolding, initial accessibility scans—freeing humans for higher-order work. A 2026 Harvard Business Review study of 1,488 U.S. workers found that those who used AI specifically to remove toil reported 15% lower burnout rates and higher social connection with peers. Why? Because they reclaimed time previously lost to mundane tasks, redirecting it toward collaborative problem-solving "off keyboard." The key is measuring not just time saved, but how that time is reinvested.
Second, institutionalize productive friction. Borrow a page from Pixar’s playbook: Steve Jobs designed their studios to force unavoidable encounters, knowing that eye contact and spontaneous dialogue spark creativity. In the AI era, this means building systems that connect people to knowledge sources rather than replacing them. When creating internal AI agents, attach the original creator’s name and direct seekers to them—turning a query into a mentorship opportunity. Publicly celebrate teams that use AI to amplify collaboration (e.g., "Agent X helped Team Y uncover insight Z by pointing them to Expert A"). Rotate roles so product managers prototype alongside designers, and engineers sit in on research syntheses. Host regular cross-functional panels debating how work is evolving—making change visible and participatory.
Third, leverage AI for connection, not just efficiency. Humor remains a powerful bonding agent, and AI can amplify it absurdly. Run "worst volume control" vibe-coding contests where teams deliberately create terrible solutions using new tools—learning through laughter. Generate AI-powered cliché twists (e.g., a motivational poster that says "Believe in yourself... but verify the data") as meeting icebreakers. These activities build psychological safety by creating shared, low-stakes joy—a reminder that behind every AI output are humans learning, failing, and growing together.
The question isn’t whether AI belongs in teams—it’s already embedded in workflows. The challenge is shaping its role to serve, not supplant, the human connections that make teams greater than the sum of their parts. As one engineering leader put it: "We optimized for speed and forgot we were building a community, not a factory." The most resilient teams will be those that use AI to handle the routine while fiercely protecting the space for the unexpected, the unclear, and the deeply human moments where trust is actually forged.
