MCP Routing When 3 or More Servers Are Attached

Agents with 3+ MCP servers need explicit routing. Without it, the agent picks the wrong tool half the time.

5 min read

Agents that attach 3+ MCP servers (Scavio + Browserbase + custom database + Court Listener) need explicit routing logic. Without it, the agent calls the wrong tool roughly half the time on ambiguous prompts. With it, the correct-tool-pick rate climbs to 90%+.

The failure mode

Picture five MCP tools that all sound vaguely like "fetch web data": search, navigate, query_internal_db, court_listener_lookup, fetch_pdf. The agent reads the descriptions, picks one, and is sometimes right. Descriptions that are ambiguous about when each tool should fire produce mostly random tool selection.

The fix in three parts

  1. Explicit naming. Tools start with the vendor or purpose: scavio_search, browserbase_navigate, company_db_query. Generic names like search or fetch get conflated (a naming sketch follows this list).
  2. Routing prompt. A skill-level instruction that tells the agent when to use which tool, not just what each one does.
  3. Tool descriptions that double as routing signals. Each description explicitly says when not to use the tool.
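
Part 1 can be enforced mechanically. A minimal sketch, assuming tools arrive from each server as plain name/description dicts; the helper and variable names are illustrative, not any particular MCP SDK:

Python
def prefix_tools(server_name: str, tools: list[dict]) -> list[dict]:
    """Prefix each tool name with its server so a generic `search`
    cannot collide with another server's `search`."""
    prefixed = []
    for tool in tools:
        tool = dict(tool)  # copy; don't mutate the server's own list
        tool["name"] = f"{server_name}_{tool['name']}"
        prefixed.append(tool)
    return prefixed

# Illustrative inputs: two servers that both ship a generic tool name.
scavio_tools = [{"name": "search", "description": "Live public-web search."}]
db_tools = [{"name": "query", "description": "Read-only internal DB queries."}]

agent_tools = prefix_tools("scavio", scavio_tools) + prefix_tools("company_db", db_tools)
# -> scavio_search, company_db_query: unambiguous before the agent ever routes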

The routing prompt

# tool-routing.md

This agent has tools from multiple MCP servers. Use the right one:

- For SERP, Reddit, YouTube, Amazon, Walmart, public articles:
  use Scavio MCP (scavio_search, scavio_reddit, scavio_extract).

- For auth-gated dashboards (Salesforce, internal SaaS, account-only
  data) or JS-only SPAs: use Browserbase MCP (navigate, click,
  fill_form).

- For internal company data (CRM records, ticket systems, account
  history): use Company DB MCP (db_query).

- For federal case law lookups: use Court Listener MCP.

If a target is publicly indexed, prefer Scavio over Browserbase.
If a question is about internal data, never escalate to Scavio.

The tool description pattern

Every tool description should answer: when do I use this, and when do I not?

JSON
{
  "name": "scavio_search",
  "description": "Live web search returning typed JSON with organic results, AI Overview citations, knowledge graph. Use for indexed public-web queries. Do NOT use for: internal company data, auth-gated dashboards, JS-only SPAs, federal case law (use court_listener instead)."
}
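
That negative clause is checkable at startup. A small lint pass over the merged tool list; the keyword matching here is a heuristic I'm assuming, not any standard:

Python
def lint_tool_descriptions(tools: list[dict]) -> list[str]:
    """Flag tools missing either routing signal: a positive 'use for'
    clause or an explicit 'do not use' clause. Keywords are heuristic."""
    problems = []
    for tool in tools:
        desc = tool.get("description", "").lower()
        if "use for" not in desc and "use when" not in desc:
            problems.append(f"{tool['name']}: no positive routing signal")
        if "do not use" not in desc:
            problems.append(f"{tool['name']}: no negative routing signal")
    return problems

# The scavio_search description above passes; a bare "Live web search."
# description would be flagged twice.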

What scaling past 8 tools looks like

Inline routing in the skill prompt decays past about 7-8 tools. The agent loses track. The fix at that scale is an explicit router agent: a small first-pass LLM call (Haiku or DeepSeek) that picks the tool, then the main agent runs only the picked tool. Adds 100-200ms of latency; correctness climbs to 95%+.
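
A minimal sketch of that first pass, assuming the Anthropic Python SDK; the model id and tool menu are placeholders, and in practice the menu would be generated from the live tool list:

Python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

TOOL_MENU = """\
scavio_search: indexed public web (SERP, Reddit, YouTube, articles)
browserbase_navigate: auth-gated dashboards, JS-only SPAs
company_db_query: internal CRM, tickets, account history
court_listener_lookup: federal case law
"""

def route(user_query: str) -> str:
    """First pass: a small, fast model returns only a tool name."""
    resp = client.messages.create(
        model="claude-haiku-4-5",  # assumption: any cheap router model works
        max_tokens=16,
        messages=[{
            "role": "user",
            "content": (
                "Pick exactly one tool for this query.\n"
                f"Tools:\n{TOOL_MENU}\n"
                f"Query: {user_query}\n"
                "Answer with the tool name only."
            ),
        }],
    )
    return resp.content[0].text.strip()

# The main agent then runs with only the picked tool attached.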

Why this matters more in 2026

Hosted MCP servers are now common, and builders attach 3-5 of them without a second thought because each one adds capability. Without routing, the agent gets worse with each addition, not better. Bad routing is the single biggest reason capable agents fail in production.

Verification

Log every tool call for the first 20-50 sessions. Review: did the agent pick the right tool? If not, why? Tighten the routing prompt. Re-run. Most teams iterate 3-5 times before the routing settles. The investment pays off because the agent stays accurate as more MCP servers get added.
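
A minimal version of that log, assuming tool calls funnel through one dispatch point; the JSONL path and field names are arbitrary choices, not a standard:

Python
import json
import time

LOG_PATH = "tool_calls.jsonl"  # arbitrary; anything greppable works

def log_tool_call(session_id: str, tool_name: str, args: dict) -> None:
    """Append one record per tool call for later review."""
    record = {"ts": time.time(), "session": session_id,
              "tool": tool_name, "args": args}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

def wrong_tool_rate(expected: dict[str, str]) -> float:
    """Score logged picks against a hand-labeled {session_id: right_tool} map."""
    with open(LOG_PATH) as f:
        calls = [json.loads(line) for line in f]
    labeled = [c for c in calls if c["session"] in expected]
    if not labeled:
        return 0.0
    wrong = sum(1 for c in labeled if c["tool"] != expected[c["session"]])
    return wrong / len(labeled)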

The honest constraint

Tool routing is prompt engineering, not a solved problem. Different LLMs route differently; Claude Sonnet 4.6 routes better than Haiku 4.5; both improve every quarter. The routing prompt that worked in Q1 may need tuning by Q3. Treat routing as living code, not a one-time setup.

What ships in week one

Three MCP servers attached, with the routing prompt written and verified on 20 sample queries. A tool-call log dashboard so the team can spot wrong routing in subsequent sessions and tighten the prompt.