FinStrat Insights

Bots Fight at Work

How AI-assisted messaging is replacing human conversation, and why a simple phone call still wins.

A quiet but growing dysfunction is spreading across modern workplaces, one that hides behind the appearance of productivity. Employees are responding to emails and chat messages with AI-generated text. They are pasting conversation threads into tools such as ChatGPT or Claude, receiving polished-sounding replies and sending those replies without fully understanding the underlying issue. The result is a new kind of workplace conflict: the bot fight.

Two or more participants in a discussion are, effectively, no longer talking to each other. They are feeding a machine, receiving output and lobbing that output across a digital divide. Nuance evaporates. Context collapses. And what could have been resolved with a two-minute phone call stretches into an hours-long thread of confusion.

This is not a hypothetical problem. It is playing out in Slack channels, Microsoft Teams threads and email inboxes every day. The antidote is remarkably simple and remarkably human.

Consider a scenario familiar to anyone who has worked in a mid-to-large organization. Party A, a project lead in one department, needs a specific set of data from Party B, a team in a separate department. Party A reaches out via Slack, requesting the information for an ongoing project. Party B responds professionally, asking a handful of clarifying questions: What format is needed? What time period does the data cover? Has this been requested before?

These are not obstacles. They are the normal, responsible questions of a team that wants to deliver accurate information. Party A responds and the two parties begin working through the details. At one point, Party B notes that a similar data set was prepared for a related project several months earlier and suggests that with minor modifications it could serve Party A’s current needs. It is a constructive, efficient proposal.

Party C is a well-meaning employee from a third department who has been peripherally aware of the project. Seeing the back-and-forth in the channel, he decides to step in and help. He does not have the full history of the project. He was not part of the earlier conversations between Party A and Party B. He has not read through the thread carefully.

What he sees, through a very incomplete lens, looks like resistance: Party B asking questions, revisions being proposed and a continuing exchange. So Party C does what has become increasingly common. He copies the conversation into Claude and asks it to generate a response that will move things forward.

The AI obliges. It produces something that sounds authoritative and professional. Party C pastes it into the Slack channel.

The response misses the mark. It addresses questions that were already answered. It ignores the nuanced discussion about reusing prior work. It introduces assumptions about the project scope that are simply incorrect. Rather than accelerating the conversation, it muddies it.

Party B, who has been engaged in a perfectly reasonable and productive dialogue with Party A, now recognizes that the conversation has gone sideways. The channel has become noisy. There is a new voice introducing irrelevant points, and that voice sounds confident. Rather than continuing to type into an increasingly cluttered thread, Party B makes a simple suggestion: Let’s get on a call.

Before Party A can even respond to the suggestion, Party C is back. He has returned to Claude with an updated prompt, fed it the latest messages and generated another response. This one, again, is polished in tone but disorienting in substance. It attempts to summarize a conversation it does not understand, proposes a path forward that neither Party A nor Party B had discussed and adds yet another layer of confusion to an exchange that was, before Party C’s arrival, heading toward a resolution.

"The chat thread has become a performance, not a conversation."

This is the bot fight in its purest form. No one is communicating anymore. There is a human typing on one end and an AI generating text on the other, but the actual exchange of meaning, the understanding of requirements, constraints, history and intent, has broken down completely.

Party B, recognizing that another round of text exchanges will only deepen the noise, decides not to wait for a scheduled call. He picks up the phone and calls Party A directly.

The call is short. Within the first 30 seconds, both parties confirm what they already suspected: They were in agreement. Party A’s requirements are clear. Party B’s earlier proposal to adapt the prior data set is not only acceptable but preferred. The apparent back-and-forth that Party C had mistaken for conflict was simply two professionals doing their due diligence.

In the space of a brief, direct conversation, the ambiguity is cleared. The scope is confirmed. The timeline is set. Party A and Party B hang up having accomplished more in a few minutes than the entire Slack thread had managed in the preceding hour. The matter is resolved not because the AI generated better output, but because two human beings spoke to each other.

Party C, meanwhile, remains unaware that his contributions created friction rather than resolved it.

The issue here is not AI. Tools such as ChatGPT and Claude are genuinely useful when applied with context, intent and subject-matter knowledge. They can help draft communications, summarize lengthy documents, brainstorm solutions and accelerate research. The problem arises when they are used as a substitute for understanding: when a person who does not grasp the full picture of a situation uses an AI tool to generate a response that sounds as if he does.

AI language models are trained to produce fluent, coherent and confident-sounding text. They are very good at this. But fluency is not the same as accuracy, and confidence is not the same as correctness. When Party C fed the Slack conversation to Claude, the tool had no way of knowing the history between Party A and Party B, the nature of the prior project or the fact that the conversation was already moving toward resolution. It generated text that fit the surface pattern of the conversation without understanding its substance.

This is what creates bot fights. Confident-sounding, context-free text is injected into conversations where real understanding is required. Each AI-generated message demands a response. Each response, if also AI-generated or AI-assisted without true comprehension, adds another layer of well-phrased confusion. The thread grows. The resolution recedes.

There is also a subtler damage: trust. When colleagues receive AI-generated responses that miss the mark, they begin to sense that they are not actually being heard. The appearance of engagement (the professional phrasing, the prompt response time) masks an absence of genuine attention. Over time, this erodes the collaborative relationships that make organizations function.

The lesson from this scenario is not that AI tools should be banned from the workplace, or that Slack is an inferior communication platform. The lesson is that no tool, however sophisticated, can replace the subject expertise, context and relational intelligence that a human being brings to a conversation he is invested in.

When Party C decided to intervene, the right move would have been to read the thread carefully, acknowledge the limits of his own knowledge and either ask a clarifying question or simply let the people closest to the work handle it. Instead, he outsourced his participation to a machine. The machine performed its function. Humans suffered the consequences.

Party B, by contrast, demonstrated exactly the right instinct. When written communication is creating more heat than light, stop writing and start talking. A phone call is not a step backward in a digital workplace. It is often the most efficient tool available. It transmits tone, confirms understanding in real time, allows for immediate course correction and, crucially, it ends. There is no thread to scroll through, no notifications to manage and no AI-generated non sequiturs to respond to.

The next time you find yourself in a workplace chat thread that is growing longer and less productive, consider two questions before typing your next response. First: Do I actually understand this situation well enough to add value here? Second: Would a direct conversation resolve this faster? If the honest answer to the first is no, or the honest answer to the second is yes, put down the keyboard. Pick up the phone. Be a human. It still works.