Can a support agent push back when the customer is wrong — politely — and still maintain satisfaction?
Every support bot is trained to be agreeable. But sometimes the customer is wrong — they're using the product incorrectly, misunderstanding a feature, or requesting something that would hurt their own outcome. This experiment builds a support agent that can respectfully disagree, explain why, and guide the user toward a better solution — even when that means saying no.
The agent chooses between agreement and respectful disagreement based on factual assessment.
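One way to picture that choice: a minimal decision layer that agrees when the customer is right, disagrees only when it has high-confidence evidence, and escalates otherwise. This is a sketch under assumed names — `Assessment`, `choose_stance`, and the confidence threshold are illustrative, not the experiment's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Stance(Enum):
    AGREE = auto()
    DISAGREE = auto()
    ESCALATE = auto()


@dataclass
class Assessment:
    claim_is_correct: bool  # does the customer's claim match product facts?
    confidence: float       # 0..1 confidence in that factual check
    sensitive_topic: bool   # e.g. billing disputes: always a human's call


def choose_stance(a: Assessment, min_confidence: float = 0.8) -> Stance:
    """Agree when the customer is right; disagree only with high-confidence
    evidence; escalate when the topic is sensitive or confidence is low."""
    if a.sensitive_topic:
        return Stance.ESCALATE  # hard boundary: never push back here
    if a.claim_is_correct:
        return Stance.AGREE
    if a.confidence >= min_confidence:
        return Stance.DISAGREE  # confident, grounded pushback only
    return Stance.ESCALATE      # vague pushback damages trust; hand off
```

The asymmetry is deliberate: agreement needs no confidence gate, but disagreement does, because an unfounded correction costs more trust than a handoff.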
Honesty with empathy outperforms empty agreement — the “empathy sandwich” (empathy → evidence → alternative) is the reliable pattern.
Confident disagreement requires deep product knowledge — vague pushback damages trust faster than saying nothing.
Some disagreements should always escalate to a human — the agent needs clear boundaries on what it can push back on.
The highest-value correction is catching misuse early — redirecting to the correct workflow before frustration compounds.
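The empathy-sandwich pattern above (empathy → evidence → alternative) can be sketched as a simple response template. The helper and the product detail in the usage example are hypothetical, shown only to make the three-part structure concrete.

```python
def empathy_sandwich(empathy: str, evidence: str, alternative: str) -> str:
    """Compose a respectful-disagreement reply in three parts:
    acknowledge the customer's experience, state the factual correction,
    then redirect to a workable alternative."""
    return (
        f"{empathy} "
        f"That said, {evidence} "
        f"Here's what I'd suggest instead: {alternative}"
    )


# Illustrative usage (the export-cap scenario is invented for the example):
reply = empathy_sandwich(
    "I completely understand why that result is frustrating.",
    "the export you're running isn't designed for datasets this large, "
    "which is why it keeps timing out.",
    "splitting it into smaller scheduled exports, which avoids the timeout.",
)
```

Keeping the three parts as explicit arguments forces every disagreement to carry all three, so the agent can never emit bare pushback without evidence or an alternative.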
We’re exploring this pattern for Relay’s customer-facing workflow templates. The “diplomatic disagreement” module could become a reusable component for any support automation built on the platform.