
Multi-Agent Systems Need Search as a Core Tool

DeerFlow, LangGraph, and CrewAI all need search. Compare tool definitions and provider options.


Every multi-agent framework -- LangGraph, CrewAI, DeerFlow, AutoGen -- needs a search tool, and most ship with Tavily or Serper as the default. Swapping in a multi-platform search API gives agents access to Google, Amazon, YouTube, Reddit, Walmart, and Google Shopping through one tool definition instead of six.

The Default Search Problem

LangGraph defaults to Tavily. CrewAI defaults to Serper. DeerFlow ships with a built-in search node using Tavily. These defaults work for demos but limit production agents: Tavily returns its own index (not Google), and Serper returns Google only. An agent researching product pricing cannot check Amazon. An agent analyzing community sentiment cannot search Reddit. You end up registering 3-4 search tools, each with its own API key and response schema.
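The schema mismatch is the hidden cost: with multiple providers, your agent layer needs one adapter per response shape. A minimal sketch of what that looks like (the field names below are illustrative of typical provider responses, not exact schemas):

```python
# Hypothetical per-provider adapters: each provider nests results
# under a different key and names the URL field differently.
def normalize(provider: str, payload: dict) -> list[dict]:
    """Map each provider's response shape onto one common schema."""
    if provider == "tavily":        # illustrative shape
        items = payload.get("results", [])
        return [{"title": r["title"], "url": r["url"]} for r in items]
    if provider == "serper":        # illustrative shape
        items = payload.get("organic", [])
        return [{"title": r["title"], "url": r["link"]} for r in items]
    raise ValueError(f"unknown provider: {provider}")
```

A single multi-platform API with one response schema removes the need for these adapters entirely.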

LangGraph: Custom Tool Definition

Python
from langchain_core.tools import tool
import requests, os

API_KEY = os.environ["SCAVIO_API_KEY"]

@tool
def search(query: str, platform: str = "google") -> list:
    """Search across Google, Amazon, YouTube, Reddit, Walmart,
    or Google Shopping. Set platform to target a specific source."""
    resp = requests.post(
        "https://api.scavio.dev/api/v1/search",
        headers={"x-api-key": API_KEY,
                 "Content-Type": "application/json"},
        json={"platform": platform, "query": query},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    results = data.get("organic", [])[:5]
    return [{"title": r["title"], "url": r["link"],
             "snippet": r.get("snippet", "")} for r in results]

# Use in a LangGraph agent
from langgraph.prebuilt import create_react_agent
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-sonnet-4-20250514")
agent = create_react_agent(llm, tools=[search])

CrewAI: Custom Tool Class

Python
from crewai.tools import BaseTool
import requests, os

class MultiPlatformSearch(BaseTool):
    name: str = "multi_platform_search"
    description: str = (
        "Search Google, Amazon, YouTube, Reddit, Walmart, "
        "or Google Shopping. Use the platform parameter to "
        "specify the source."
    )

    def _run(self, query: str, platform: str = "google") -> str:
        resp = requests.post(
            "https://api.scavio.dev/api/v1/search",
            headers={"x-api-key": os.environ["SCAVIO_API_KEY"],
                     "Content-Type": "application/json"},
            json={"platform": platform, "query": query},
            timeout=30,
        )
        resp.raise_for_status()
        results = resp.json()["data"].get("organic", [])[:5]
        return "\n".join(
            f"- {r['title']}: {r['link']}" for r in results
        )

search_tool = MultiPlatformSearch()

DeerFlow / AutoGen: Function Calling

Python
# DeerFlow and AutoGen both support raw function tools
import requests, os

def search_web(query: str, platform: str = "google") -> list:
    """Search multiple platforms through one API.
    Platforms: google, amazon, youtube, reddit, walmart, google_shopping
    """
    resp = requests.post(
        "https://api.scavio.dev/api/v1/search",
        headers={"x-api-key": os.environ["SCAVIO_API_KEY"],
                 "Content-Type": "application/json"},
        json={"platform": platform, "query": query},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"].get("organic", [])[:5]

# AutoGen example
# assistant.register_for_llm(name="search")(search_web)
# user_proxy.register_for_execution(name="search")(search_web)
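Under the hood, an OpenAI-style function-calling layer derives a JSON schema from the function signature and docstring. For search_web it would look roughly like this (shown as a Python dict; the exact wrapping and description text vary by framework and model API):

```python
# Approximate JSON schema a function-calling layer would derive
# from search_web. Exact structure varies by framework.
search_web_schema = {
    "name": "search_web",
    "description": "Search multiple platforms through one API.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "platform": {
                "type": "string",
                "enum": ["google", "amazon", "youtube", "reddit",
                         "walmart", "google_shopping"],
                "default": "google",
            },
        },
        "required": ["query"],
    },
}
```

Listing the platforms as an enum keeps the model from inventing unsupported values.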

Why One Tool Beats Many

Every tool registered with an agent consumes context tokens. A tool schema is roughly 200-400 tokens depending on description length. Registering separate tools for Google, Amazon, YouTube, Reddit, and Walmart costs 1,000-2,000 tokens of context on every turn. A single multi-platform tool with a platform parameter costs 300 tokens. Over a 20-turn agent session, that saves 14,000-34,000 tokens.
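The arithmetic behind those numbers, as a quick sanity check (the per-schema token counts are the estimates above, not measured values):

```python
# Back-of-envelope token math using the estimates above.
schema_tokens = (200, 400)   # estimated tokens per tool schema
n_tools, turns = 5, 20       # five single-platform tools, 20-turn session
single_tool = 300            # estimated tokens for one multi-platform tool

multi_cost = [t * n_tools for t in schema_tokens]         # per-turn cost range
savings = [(c - single_tool) * turns for c in multi_cost]
print(multi_cost, savings)   # [1000, 2000] [14000, 34000]
```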

Side-by-Side: Default vs Multi-Platform

Python
# Default setup: 3 tools, 3 API keys, 3 response schemas
# tools = [TavilySearch(), SerperSearch(), custom_reddit_scraper]
# Token cost: ~900 tokens/turn for tool schemas
# APIs to maintain: 3
# Total monthly cost at 5K queries:
#   Tavily: $40 + Serper: $50 + Reddit scraper hosting: $20 = $110

# Multi-platform setup: 1 tool, 1 API key, 1 response schema
# tools = [search]  # handles all platforms
# Token cost: ~300 tokens/turn for tool schemas
# APIs to maintain: 1
# Total monthly cost at 5K queries:
#   Scavio $30/month plan (7K credits included)

# The agent decides which platform to search based on the task:
# "Find product reviews" -> platform="youtube"
# "Check current prices" -> platform="amazon"
# "Find community opinions" -> platform="reddit"
# "General web search" -> platform="google"
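If you want a deterministic fallback instead of relying solely on the model's judgment, a keyword router along these lines can pre-select the platform (a heuristic sketch, not part of any framework):

```python
# Heuristic fallback router: map task wording to a platform.
# Keyword list is illustrative; extend for your own task mix.
ROUTES = {
    "review": "youtube",
    "price": "amazon",
    "opinion": "reddit",
    "community": "reddit",
}

def pick_platform(task: str) -> str:
    task = task.lower()
    for keyword, platform in ROUTES.items():
        if keyword in task:
            return platform
    return "google"  # general web search as the default
```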

MCP Alternative

If your framework supports MCP (Claude Code, Cursor, Windsurf, VS Code with Copilot), you can skip custom tool definitions entirely. The MCP server registers the search tool automatically:

JSON
{
  "mcpServers": {
    "scavio": {
      "type": "http",
      "url": "https://mcp.scavio.dev/mcp",
      "headers": {
        "x-api-key": "YOUR_SCAVIO_KEY"
      }
    }
  }
}

Choosing the Right Default

Keep Tavily if you only need web search and your framework defaults to it.

Keep Serper if you only need Google and run 10K+ queries/month -- $0.10/1K is unbeatable for Google-only volume.

Switch to Scavio if your agents need to search products, videos, shopping, or community discussions, not just web pages. At $0.005/credit across 6 platforms, the per-platform cost is under $0.001.
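The break-even logic can be checked with the prices quoted above (assuming one Scavio credit per query and no plan discounts):

```python
# Monthly cost comparison at a given query volume, using the
# per-query prices quoted above (no plan discounts applied).
def serper_cost(queries: int) -> float:
    return queries / 1000 * 0.10   # $0.10 per 1K Google-only queries

def scavio_cost(queries: int) -> float:
    return queries * 0.005         # $0.005 per credit, 6 platforms

# At 10K Google-only queries/month, Serper is far cheaper:
print(serper_cost(10_000), scavio_cost(10_000))  # 1.0 50.0
```

This is why the recommendation splits on workload: pure Google volume favors Serper; multi-platform coverage is what Scavio's pricing buys.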