Customer Support Agents: The Answering vs Doing Gap
Support AI agents are great at FAQ answers but fail when customers need actions. The practical architecture: AI answers + human-in-the-loop actions.
Most customer support AI agents are great at retrieving and summarizing knowledge base articles but fall apart when the customer needs something done: a refund, an account change, an escalation. The answering vs doing gap is the real issue, and trying to automate both at once usually means neither works well.
Why Knowledge-Only Agents Hit a Wall
A support agent that can only search and summarize hits its limit within 30 seconds of a typical customer interaction. The customer asks "how do I reset my password?" -- the agent answers well. The customer then says "I tried that and it did not work, can you reset it for me?" -- the agent loops back to the same KB article. This loop is where CSAT scores drop and customers escalate to human agents anyway.
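One cheap guardrail against this loop: track which KB articles have already been served in the session, and escalate instead of repeating the same answer. A minimal sketch -- the function name and session shape here are illustrative, not from any particular framework:

```python
def next_step(session_history, kb_article_id):
    """Escalate instead of re-serving a KB article the customer already tried.

    session_history: list of article IDs served so far in this session.
    """
    if kb_article_id in session_history:
        # The customer has already seen this answer and it did not work.
        return {"action": "escalate", "reason": "kb_answer_already_tried"}
    session_history.append(kb_article_id)
    return {"action": "answer", "article": kb_article_id}
```

The second time the retrieval layer picks the same article, the agent hands off rather than repeating itself, which is exactly the point where CSAT would otherwise drop.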
The Practical Architecture
The fix most teams land on: keep the text agent for FAQ-style questions and wire it to live data so answers are current, then put a human in the loop for anything that requires a write action. This split is not a failure of AI -- it is the architecture that actually works in production.
- Read operations (search KB, check status, look up info): AI agent handles these directly
- Write operations (refund, account change, escalation): AI drafts the action, human approves
- Live data questions (current pricing, outage status, policy changes): AI searches the web for current info
Adding Live Web Search to the Knowledge Layer
The knowledge base goes stale. Products change pricing, features get deprecated, and policies update. When a customer asks about something the KB does not cover, the agent should search the web for current information rather than admitting it does not know or guessing.
```python
import requests, os

H = {"x-api-key": os.environ["SCAVIO_API_KEY"]}

def support_fallback_search(question, product=""):
    """Search the web when the KB has no answer."""
    query = f"{product} {question}" if product else question
    google = requests.post("https://api.scavio.dev/api/v1/search",
                           headers=H, json={"platform": "google", "query": query},
                           timeout=10).json()
    reddit = requests.post("https://api.scavio.dev/api/v1/search",
                           headers=H, json={"platform": "reddit", "query": query},
                           timeout=10).json()
    results = []
    for r in google.get("organic", [])[:3]:
        results.append({
            "source": "official", "title": r.get("title"),
            "snippet": r.get("snippet"), "url": r.get("link")
        })
    for r in reddit.get("organic", [])[:2]:
        results.append({
            "source": "community", "title": r.get("title"),
            "url": r.get("link")
        })
    return results
```

Google for Official Answers, Reddit for Workarounds
The Google + Reddit combination is particularly effective for support agents. Google returns official documentation and help articles. Reddit surfaces community workarounds that official docs often miss: the specific browser setting that fixes the issue, the undocumented API parameter, the sequence of steps that actually works when the official steps do not.
The key is presenting both sources with clear attribution. "According to our help docs, you should..." for Google results. "Other users have found that..." for Reddit results. This gives the customer both the official path and the practical path, which reduces follow-up tickets.
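That attribution step can be a simple formatter over the results returned by `support_fallback_search`. A sketch, assuming the `{"source", "title", "url"}` result shape from the function above (the phrasing templates are just the examples from this section):

```python
def format_with_attribution(results):
    """Prefix each result with language matching its source."""
    lines = []
    for r in results:
        if r["source"] == "official":
            lines.append(f'According to our help docs: {r["title"]} ({r["url"]})')
        else:
            lines.append(f'Other users have found that: {r["title"]} ({r["url"]})')
    return "\n".join(lines)
```

Keeping the attribution in a formatter rather than in the prompt means the official/community framing is applied consistently, regardless of how the model phrases the rest of the reply.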