
CrewAI vs LangGraph: Search Grounding Compared

Side-by-side comparison of search tool integration in CrewAI and LangGraph with Scavio examples.

8 min

CrewAI gets a research agent with web search working in hours. LangGraph takes days but gives you fine-grained control over search result routing, retry logic, and state management. Both need an external search provider -- neither includes one. Here is how to add search grounding to each with side-by-side code.

CrewAI: Simplicity First

CrewAI is a high-level framework where you define agents with roles, goals, and tools, then let the framework handle orchestration. Adding a search tool means subclassing BaseTool and passing it to your agent. The agent decides when to search based on its goal and the task description. You do not control the search-then-reason flow -- CrewAI handles it.

CrewAI Search Integration

Python
import requests, os
from crewai import Agent, Task, Crew
from crewai.tools import BaseTool

H = {"x-api-key": os.environ["SCAVIO_API_KEY"]}

class WebSearch(BaseTool):
    name: str = "web_search"
    description: str = (
        "Search the web for current information. "
        "Returns top 5 results with titles and snippets."
    )

    def _run(self, query: str) -> str:
        resp = requests.post(
            "https://api.scavio.dev/api/v1/search",
            headers=H,
            json={"platform": "google", "query": query},
            timeout=10,
        )
        resp.raise_for_status()  # surface HTTP errors instead of parsing an error body
        results = resp.json().get("organic_results", [])[:5]
        if not results:
            return "No results found."
        return "\n".join(
            f"- {r['title']}: {r['snippet']}" for r in results
        )

researcher = Agent(
    role="Market Researcher",
    goal="Find current, accurate data about market trends and pricing",
    backstory="You verify every claim with a web search",
    tools=[WebSearch()],
    llm="gpt-4o",
)

task = Task(
    description="Compare CRM pricing for startups in 2026",
    expected_output="A pricing comparison table with sources",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)

LangGraph: Control Every Step

LangGraph models agent workflows as state machines. You define nodes (search, reason, validate) and edges (conditional routing). This means you can add retry logic when search fails, route different query types to different search platforms, and validate results before passing them to the LLM. More code, but more production-ready.

LangGraph Search Integration

Python
import requests, os
from typing import TypedDict
from langgraph.graph import StateGraph, END

H = {"x-api-key": os.environ["SCAVIO_API_KEY"]}

class ResearchState(TypedDict):
    query: str
    search_results: list
    answer: str
    search_attempts: int

def search_node(state: ResearchState) -> dict:
    try:
        resp = requests.post(
            "https://api.scavio.dev/api/v1/search",
            headers=H,
            json={"platform": "google", "query": state["query"]},
            timeout=10,
        )
        resp.raise_for_status()
        results = resp.json().get("organic_results", [])[:5]
    except requests.RequestException:
        results = []  # empty results route through the retry edge below
    return {
        "search_results": results,
        "search_attempts": state.get("search_attempts", 0) + 1,
    }

def should_retry(state: ResearchState) -> str:
    if not state["search_results"] and state["search_attempts"] < 3:
        return "retry"
    return "reason"

def reason_node(state: ResearchState) -> dict:
    context = "\n".join(
        f"- {r['title']}: {r['snippet']}"
        for r in state["search_results"]
    )
    # In production, call your LLM here with context
    return {"answer": f"Based on {len(state['search_results'])} sources: ..."}

graph = StateGraph(ResearchState)
graph.add_node("search", search_node)
graph.add_node("reason", reason_node)
graph.set_entry_point("search")
graph.add_conditional_edges("search", should_retry, {
    "retry": "search",
    "reason": "reason",
})
graph.add_edge("reason", END)

app = graph.compile()
result = app.invoke({"query": "crm pricing 2026", "search_results": [],
                     "answer": "", "search_attempts": 0})
print(result["answer"])

Side-by-Side Comparison

  • Lines of code: CrewAI ~30, LangGraph ~50
  • Time to first working agent: CrewAI hours, LangGraph days
  • Search retry logic: CrewAI -- no (agent decides), LangGraph -- yes (explicit)
  • Multi-platform routing: CrewAI -- manual tool selection, LangGraph -- conditional edges
  • State management: CrewAI -- implicit, LangGraph -- typed state dict
  • Debugging: CrewAI -- log agent reasoning, LangGraph -- inspect each node

When to Use Which

Use CrewAI when you need a research agent running this week and the task is straightforward: search, synthesize, output. Use LangGraph when you need production reliability: retries on search failure, routing queries to different platforms (Google for general, Reddit for community sentiment, Amazon for product data), and typed state that makes debugging reproducible. Both work with any search API. The framework choice is about orchestration complexity, not search capability.
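The platform-routing idea can be sketched as a plain function that a LangGraph conditional edge (or a CrewAI tool) would call before hitting the search API. The keyword lists below are illustrative assumptions, not part of either framework:

```python
# Minimal sketch: pick a search platform from query keywords.
# The keyword lists are illustrative assumptions, not framework features.
ROUTES = {
    "reddit": ["opinion", "review", "community", "sentiment"],
    "amazon": ["product", "buy", "bestseller"],
}

def choose_platform(query: str) -> str:
    q = query.lower()
    for platform, keywords in ROUTES.items():
        if any(k in q for k in keywords):
            return platform
    return "google"  # general queries fall through to web search
```

In LangGraph this function slots naturally into a conditional edge; in CrewAI you would instead expose one tool per platform and let the agent pick, which is why the comparison above calls that "manual tool selection".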

Search Cost Is Identical

The search API cost is the same regardless of framework: $0.005 per Scavio query. The LLM cost depends on how much search context you feed to the model and how many reasoning steps you allow. CrewAI agents tend to make more LLM calls (internal reasoning loops) while LangGraph workflows make fewer, more deliberate calls. In practice, the difference is 10-20% in LLM token spend for equivalent tasks.
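A back-of-envelope check of that spread, using the $0.005 search price from above. The LLM token rate and call counts are assumptions for illustration, not measured figures:

```python
# Rough cost per research task. The search price comes from the article;
# the blended LLM token rate and call counts are illustrative assumptions.
SEARCH_PRICE = 0.005          # $ per Scavio query
TOKEN_PRICE = 15 / 1_000_000  # assumed blended $ per LLM token

def task_cost(searches: int, llm_calls: int, tokens_per_call: int) -> float:
    return searches * SEARCH_PRICE + llm_calls * tokens_per_call * TOKEN_PRICE

# CrewAI tends toward more reasoning calls; LangGraph fewer, larger ones.
crewai_cost = task_cost(searches=3, llm_calls=6, tokens_per_call=2_000)
langgraph_cost = task_cost(searches=3, llm_calls=4, tokens_per_call=2_500)
```

With these assumed numbers the search spend is identical for both, and the LLM portion differs by about 20 percent, at the top of the range quoted above.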