mcp · crm · agents

CRM Context via MCP for Agent Search Workflows

Adding customer context to AI agents via MCP: the CRMy pattern combines CRM data with search enrichment for context-aware agent responses.

6 min read

AI agents that answer customer questions without knowing who the customer is give generic responses. Adding CRM context — company size, plan tier, past tickets, industry — turns a chatbot into a context-aware agent. The MCP pattern connects your CRM data to the agent and adds search enrichment for questions the CRM cannot answer.

The problem with context-blind agents

A customer on your enterprise plan asks about API rate limits. A context-blind agent gives the generic answer from your docs. A context-aware agent knows they are on the enterprise plan, sees they hit rate limits three times last week, and responds with their specific limit (50K/day) and a suggestion to contact their account manager for an increase. Same question, completely different value.

Most teams know this but stall on implementation because wiring CRM data into an LLM prompt feels like building a custom integration from scratch. MCP (Model Context Protocol) standardizes this.

The CRMy pattern

CRMy is not a product — it is a pattern. The agent has access to two tool types: a CRM lookup tool that pulls customer data, and a search tool that fills knowledge gaps the CRM does not cover. When a customer asks a question, the agent first pulls their CRM profile, then uses search to find current information relevant to their specific context.

Python
# MCP tool definitions for the CRMy pattern
tools = [
    {
        "name": "crm_lookup",
        "description": (
            "Look up customer profile by email or ID. "
            "Returns: name, company, plan, past tickets, "
            "industry, account manager."
        ),
        "input_schema": {
            "type": "object",
            "properties": {
                "customer_id": {"type": "string"},
                "email": {"type": "string"}
            }
        }
    },
    {
        "name": "web_search",
        "description": (
            "Search the web for current information. "
            "Use when the CRM data alone cannot answer "
            "the customer question."
        ),
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "platform": {"type": "string"}
            },
            "required": ["query"]
        }
    }
]
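The tool definitions above only describe the interface; your agent runtime still has to route each tool call to a handler. A minimal sketch of that dispatch, assuming an in-memory dict as a stand-in for a real CRM backend (the customer record, handler names, and the stubbed web_search are all illustrative):

```python
# Sketch of a tool dispatcher for the two CRMy tools.
# The in-memory CRM dict below is an illustrative stand-in
# for a real CRM backend; web_search is stubbed out.

CRM = {
    "cust_42": {
        "name": "Ada", "company": "Acme Health",
        "plan": "enterprise", "industry": "healthcare",
        "past_tickets": ["rate limit exceeded"],
        "account_manager": "jordan@example.com",
    }
}

def crm_lookup(customer_id: str = "", email: str = "") -> dict:
    """Resolve a customer profile by ID (email lookup omitted here)."""
    return CRM.get(customer_id, {})

def dispatch_tool(name: str, args: dict) -> dict:
    """Route a tool call from the model to the matching handler."""
    handlers = {
        "crm_lookup": lambda a: crm_lookup(**a),
        # A real web_search handler would call your search layer.
        "web_search": lambda a: {"query": a["query"], "sources": []},
    }
    if name not in handlers:
        raise ValueError(f"unknown tool: {name}")
    return handlers[name](args)

profile = dispatch_tool("crm_lookup", {"customer_id": "cust_42"})
print(profile["plan"])  # enterprise
```

The dict-of-handlers shape keeps the dispatcher open for a third tool later (say, a billing lookup) without touching the agent loop.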

Implementing the search enrichment layer

The search tool handles questions the CRM cannot: current pricing of a competitor the customer mentioned, status of an industry regulation, or the latest version of a third-party integration. Without search, the agent either hallucinates or says "I don't know." With search, it finds the answer and cites the source.

Python
import requests

def handle_agent_search(query: str, customer_context: dict) -> dict:
    """Search enriched with customer context."""
    # Tailor the search query using CRM data
    industry = customer_context.get("industry", "")
    enriched_query = f"{query} {industry}".strip()

    resp = requests.post(
        "https://api.scavio.dev/api/v1/search",
        headers={"x-api-key": "YOUR_API_KEY"},
        json={
            "query": enriched_query,
            "platform": "google",
            "num_results": 5
        },
        timeout=30  # fail fast instead of hanging the agent
    )
    resp.raise_for_status()  # surface API errors early
    results = resp.json().get("results", [])

    return {
        "query": enriched_query,
        "sources": [
            {"title": r["title"], "url": r["url"],
             "snippet": r["snippet"]}
            for r in results
        ]
    }

# Customer in healthcare asks about compliance
customer = {"industry": "healthcare", "plan": "enterprise"}
answer = handle_agent_search("HIPAA data retention", customer)
for s in answer["sources"]:
    print(f"{s['title']}: {s['url']}")

The system prompt that ties it together

The agent needs clear instructions on when to use which tool. CRM lookup always runs first. Search runs when the CRM data plus your knowledge base does not fully answer the question.

Text
System prompt:
You are a customer support agent. For every conversation:

1. ALWAYS call crm_lookup first with the customer ID.
2. Use the customer profile to personalize your response:
   - Reference their plan tier and its specific limits
   - Acknowledge past tickets if relevant
   - Use their industry context for examples
3. If the question involves current external information
   (competitor pricing, regulations, third-party tools),
   call web_search. Cite the source URL.
4. Never guess at plan-specific details. If the CRM data
   does not include it, say "Let me check with your
   account manager" and flag for escalation.
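Step 4 is worth enforcing in code rather than leaving it to the prompt alone. A sketch of the guardrail, assuming the CRM profile is a flat dict and using the escalation phrasing from the prompt (the field names here are hypothetical):

```python
# Guardrail for plan-specific details: answer only from fields
# actually present in the CRM profile, otherwise escalate.
# Field names and the escalation message are assumptions.

ESCALATION = "Let me check with your account manager."

def plan_detail(profile: dict, field: str) -> tuple[str, bool]:
    """Return (answer, escalate_flag) for a plan-specific question."""
    if profile.get(field) is not None:
        return str(profile[field]), False
    return ESCALATION, True

profile = {"plan": "enterprise", "rate_limit": "50K/day"}
print(plan_detail(profile, "rate_limit"))   # ('50K/day', False)
print(plan_detail(profile, "sso_support"))  # escalates
```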

What this costs

The CRM lookup is internal — no API cost. The search enrichment runs only when needed, maybe 30% of conversations. At $0.005/search query and 1,000 conversations/month, you spend roughly $1.50/mo on search. The LLM inference cost dominates — but that is the same whether you add context or not. The marginal cost of context-awareness is negligible.
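The arithmetic above as a quick sanity check (the 30% search rate and $0.005 per query are the estimates from this section, not measured figures):

```python
# Monthly search cost for the CRMy pattern, using the
# figures from the text: 1,000 conversations, ~30% of
# which trigger one search at $0.005 per query.
conversations = 1_000
search_rate = 0.30
price_per_search = 0.005

monthly_cost = conversations * search_rate * price_per_search
print(f"${monthly_cost:.2f}/mo")  # $1.50/mo
```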

Tradeoffs and caveats

Adding CRM data to the LLM context means customer data flows through the model. If you use a hosted LLM, that data leaves your infrastructure. For regulated industries, this may require a local model or a provider with a BAA. The search enrichment queries are less sensitive — they are just web searches — but the CRM data itself needs careful handling. Evaluate your compliance requirements before piping customer records into any third-party API.
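One practical mitigation is an allowlist filter that strips the profile down to the fields the agent actually needs before anything reaches a hosted model. A sketch; the allowlist below is an assumption you would replace after your own compliance review:

```python
# Redact a CRM profile to an allowlist of fields before it is
# placed in the LLM context. This field list is illustrative;
# choose fields based on your compliance requirements.
ALLOWED_FIELDS = {"plan", "industry", "past_tickets"}

def redact_profile(profile: dict) -> dict:
    """Keep only allowlisted fields; drop PII like name and email."""
    return {k: v for k, v in profile.items() if k in ALLOWED_FIELDS}

raw = {
    "name": "Ada", "email": "ada@acme.example",
    "plan": "enterprise", "industry": "healthcare",
    "past_tickets": ["rate limit exceeded"],
}
print(redact_profile(raw))
# {'plan': 'enterprise', 'industry': 'healthcare',
#  'past_tickets': ['rate limit exceeded']}
```

An allowlist fails closed: a new PII field added to the CRM schema stays out of the prompt until you opt it in, which is the safer default for regulated data.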