
LangGraph Memory Meets Search Grounding

LangGraph agents with memory but no search serve stale facts confidently. This post adds search verification for time-sensitive recalled data.


LangGraph agents with persistent memory but no web search operate on stale context -- they remember what you told them but cannot verify whether it is still true. Adding search grounding to a memory-equipped LangGraph agent means the agent checks current web data before acting on recalled facts, eliminating the "confidently wrong from memory" failure mode.

The memory-only failure pattern

A LangGraph agent with checkpointer memory recalls that "SerpAPI costs $50/mo for 5K searches." That was true in 2024. In 2026, SerpAPI charges $75/mo for 5K searches. Without search grounding, the agent serves stale pricing as fact. Memory provides continuity. Search provides currency.
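The failure mode reduces to a few lines. A minimal sketch using a hypothetical in-memory store (not the LangGraph checkpointer introduced below):

```python
# Hypothetical memory store: a fact written in 2024 and never re-checked.
memory = {"serpapi_pricing": "SerpAPI costs $50/mo for 5K searches"}

def answer_from_memory(key: str) -> str:
    # No freshness check: whatever was stored is served as current fact.
    return memory.get(key, "I don't know")

# In 2026 this still returns the 2024 price, stated with full confidence.
print(answer_from_memory("serpapi_pricing"))
```

Search grounding replaces that unconditional lookup with a verify-before-answer path for time-sensitive facts.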

Architecture: memory + search nodes

Python
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver
from typing import TypedDict, Annotated
import operator, os, requests

class AgentState(TypedDict):
    messages: Annotated[list, operator.add]
    memory_context: str
    search_context: str
    needs_verification: bool

SCAVIO_KEY = os.environ["SCAVIO_API_KEY"]

def recall_memory(state: AgentState) -> AgentState:
    # Memory context is restored into state by the checkpointer before this
    # node runs; flag time-sensitive recalled facts for search verification.
    recalled = state.get("memory_context", "")
    needs_check = any(
        term in recalled.lower()
        for term in ["price", "cost", "version", "release", "update"]
    )
    return {"needs_verification": needs_check}

def search_verify(state: AgentState) -> AgentState:
    query = state["messages"][-1] if state["messages"] else ""
    resp = requests.post(
        "https://api.scavio.dev/api/v1/search",
        headers={"x-api-key": SCAVIO_KEY},
        json={"query": str(query), "num_results": 5,
              "include_ai_overview": True},
        timeout=10,  # avoid hanging the graph on a slow search call
    )
    resp.raise_for_status()
    data = resp.json()
    snippets = [r["snippet"] for r in data.get("organic_results", [])[:3]]
    return {"search_context": "\n".join(snippets)}

def should_verify(state: AgentState) -> str:
    if state.get("needs_verification", False):
        return "search"
    return "respond"

def respond(state: AgentState) -> AgentState:
    context = state.get("search_context") or state.get("memory_context", "")
    return {"messages": [f"Grounded response using: {context[:200]}"]}

graph = StateGraph(AgentState)
graph.add_node("recall", recall_memory)
graph.add_node("search", search_verify)
graph.add_node("respond", respond)

graph.set_entry_point("recall")
graph.add_conditional_edges("recall", should_verify,
    {"search": "search", "respond": "respond"})
graph.add_edge("search", "respond")
graph.add_edge("respond", END)

memory = MemorySaver()
app = graph.compile(checkpointer=memory)

When to trigger search vs trust memory

Not every recalled fact needs verification. The routing heuristic:

  • Prices, versions, dates -- always verify via search
  • User preferences, project context -- trust memory
  • Company names, product features -- verify if older than 7 days
  • API endpoints, documentation URLs -- verify if action depends on it
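The heuristic above is straightforward to encode. A sketch with hypothetical category labels (not part of the AgentState defined earlier) and the 7-day threshold from the list:

```python
from datetime import datetime, timedelta

# Hypothetical fact categories mapped to the routing rules above.
ALWAYS_VERIFY = {"price", "version", "date"}
TRUST_MEMORY = {"preference", "project_context"}
STALE_AFTER = timedelta(days=7)  # threshold for company/product facts

def route(category: str, stored_at: datetime,
          action_depends: bool = False) -> str:
    if category in ALWAYS_VERIFY:
        return "search"
    if category in TRUST_MEMORY:
        return "respond"
    if category in {"company", "product_feature"}:
        return "search" if datetime.now() - stored_at > STALE_AFTER else "respond"
    if category in {"api_endpoint", "doc_url"}:
        return "search" if action_depends else "respond"
    return "respond"  # default: trust memory
```

This could replace the keyword scan in recall_memory once recalled facts carry category and timestamp metadata.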

Cost of the verification layer

If 30% of agent interactions trigger search verification and you handle 1,000 conversations/day, that is 300 search queries/day or ~9K/month. At Scavio $0.005/credit: $45/month. At Tavily $0.008/query: $72/month. At SerpAPI $0.015/search: $135/month. The verification layer is cheap insurance against stale-memory hallucinations.
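The arithmetic above as a quick sanity check (provider prices as quoted; the helper name is illustrative):

```python
def monthly_verification_cost(convs_per_day: int, verify_rate: float,
                              price_per_query: float, days: int = 30) -> float:
    # Queries/month = conversations/day * share that triggers search * days.
    queries_per_month = convs_per_day * verify_rate * days
    return queries_per_month * price_per_query

# 1,000 conversations/day at a 30% verification rate -> 9,000 queries/month.
for name, price in [("Scavio", 0.005), ("Tavily", 0.008), ("SerpAPI", 0.015)]:
    cost = monthly_verification_cost(1000, 0.30, price)
    print(f"{name}: ${cost:.2f}/month")
```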

Testing the combined pipeline

Python
# Test with a pricing question that memory might have stale data for
config = {"configurable": {"thread_id": "user-123"}}

result = app.invoke(
    {"messages": ["What does SerpAPI cost in 2026?"],
     "memory_context": "SerpAPI costs $50/mo for 5K searches",
     "search_context": "", "needs_verification": False},
    config=config,
)
# Agent recalls $50/mo from memory, flags as price-related,
# routes to search, finds current $75/mo pricing, responds accurately
print(result["messages"][-1])

Key takeaway

Memory and search are complementary, not competing. Memory gives your agent continuity across sessions. Search gives it accuracy for time-sensitive facts. The conditional routing pattern costs almost nothing and prevents the most embarrassing agent failure mode: confidently quoting outdated information.