Replace Your 5-Step Lead Enrichment with Claude MCP
Turn lead enrichment from a manual multi-tool process into an agent conversation. Claude Code + MCP search replaces Apollo + LinkedIn + manual Googling.
The traditional 5-step lead enrichment pipeline — find, scrape, verify, enrich, score — can collapse into 1–2 MCP tool calls when you connect Claude Code to a search API like Scavio. Instead of chaining five separate tools with their own APIs, rate limits, and failure modes, you let the LLM orchestrate a single search call and extract everything in one pass.
The 5-step pipeline everyone builds
A thread on r/ClaudeWorkflows described the standard B2B lead enrichment workflow that every growth team builds some version of:
- Step 1: Find — Search for companies matching ICP criteria (industry, size, location)
- Step 2: Scrape — Visit each company website to extract contact info, tech stack, company description
- Step 3: Verify — Check if the company is real, active, and matches claimed attributes
- Step 4: Enrich — Add data from secondary sources (LinkedIn, Crunchbase, job postings)
- Step 5: Score — Rank leads by fit and intent signals
Each step typically uses a different tool: Serper or Scavio for find, Firecrawl or ScrapingBee for scrape, a custom validator for verify, Clearbit or Apollo for enrich, and a scoring model or heuristic for score. That is five API keys, five rate limits, five failure modes, and a pipeline that breaks whenever one step changes its API.
The MCP collapse
With Claude Code connected to Scavio MCP, steps 1 through 4 can happen in a single conversation turn. You describe your ICP to Claude, it calls Scavio search to find matching companies, reads the SERP snippets and metadata to extract company info, and uses follow-up searches to verify and enrich. The LLM handles the orchestration that previously required a pipeline framework.
```python
# Traditional 5-step pipeline (simplified)
import requests

def old_pipeline(icp_query: str):
    # Step 1: Find
    search_resp = requests.post(
        "https://api.scavio.dev/api/v1/search",
        headers={"x-api-key": "YOUR_KEY"},
        json={"platform": "google", "query": icp_query, "num_results": 20},
    )
    companies = search_resp.json().get("organic_results", [])
    # Step 2: Scrape (separate tool, separate API key)
    # scraped = [scraping_tool.scrape(c["link"]) for c in companies]
    # Step 3: Verify (custom logic)
    # verified = [c for c in scraped if is_valid_company(c)]
    # Step 4: Enrich (another API)
    # enriched = [enrichment_api.enrich(c["domain"]) for c in verified]
    # Step 5: Score (model or heuristic)
    # scored = sorted(enriched, key=lambda x: score(x), reverse=True)
    return companies  # Each step adds complexity and failure risk
```

With MCP, you replace this entire pipeline with a Claude Code session:
```python
# In Claude Code with Scavio MCP configured:
# "Find 20 SaaS companies in the HR tech space with 50-200 employees,
#  series A or B funded, based in the US. For each, give me the domain,
#  company description, estimated team size, and a fit score 1-10 for
#  selling developer tools."

# Claude will:
# 1. Call Scavio search with multiple query variations
# 2. Extract company info from SERP snippets
# 3. Run follow-up searches for team size and funding data
# 4. Score each lead based on the signals found
# 5. Return a structured table
```

What actually happens in the MCP call
When Claude Code calls Scavio MCP, the search returns structured data: titles, URLs, snippets, and any additional SERP features (People Also Ask, Knowledge Panel data). Claude reads this structured data and extracts company signals directly from the snippets and titles — without needing to scrape each page.
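The shape of that snippet-level extraction can be sketched in plain Python. The field names (`title`, `link`, `snippet`) mirror the response format used in the pipeline example above; the `SIGNAL_KEYWORDS` heuristics and the `extract_signals` helper are illustrative assumptions, not Scavio's or Claude's actual logic:

```python
# Sketch: pull lightweight company signals out of SERP-style results
# without scraping any pages. Keyword heuristics are illustrative only.
SIGNAL_KEYWORDS = {
    "funding": ("series a", "series b", "seed", "raised"),
    "hiring": ("hiring", "careers", "we're growing"),
    "saas": ("platform", "saas", "software"),
}

def extract_signals(organic_results: list) -> list:
    leads = []
    for item in organic_results:
        # Search title + snippet text for each signal's keywords
        text = f"{item.get('title', '')} {item.get('snippet', '')}".lower()
        signals = [name for name, words in SIGNAL_KEYWORDS.items()
                   if any(w in text for w in words)]
        leads.append({"link": item.get("link", ""), "signals": signals})
    return leads

sample = [{"title": "Acme HR Platform", "link": "https://acme.example",
           "snippet": "Acme raised a Series A to build HR software."}]
print(extract_signals(sample))
```

In a Claude Code session the LLM does this classification itself from the raw snippets; the sketch only shows why no scraping step is needed for first-pass qualification.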
For enrichment, Claude makes additional targeted searches: "[company name] team size employees" or "[company name] series A funding". Each search costs one Scavio credit ($0.005). A full enrichment of 20 companies might require 40–60 search calls: $0.20–$0.30 total.
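The budget math is simple enough to check directly. The $0.005 per-credit price is the figure above; the searches-per-company range is the 40–60 calls for 20 companies just described:

```python
# Back-of-envelope cost for an enrichment run at $0.005 per search credit.
CREDIT_COST = 0.005  # dollars per Scavio search

def run_cost(companies: int, searches_per_company: int) -> float:
    """Total dollars for one enrichment batch."""
    return companies * searches_per_company * CREDIT_COST

low = run_cost(20, 2)   # ~40 searches total
high = run_cost(20, 3)  # ~60 searches total
print(f"20 companies: ${low:.2f}-${high:.2f}")
```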
When the collapse works
- Small batches: 10–50 companies per run. Claude can handle this in a single session without losing context.
- Exploratory research: When you are still defining your ICP and want to iterate quickly on criteria.
- One-off enrichment: A list of 20 domains from a conference that need quick qualification.
When you still need the pipeline
- Volume: 500+ companies per run. Claude Code sessions have context limits and the cost of LLM tokens at this scale exceeds the cost of a traditional pipeline.
- Deep scraping: When you need data from inside the target website (pricing page content, specific product features) that does not appear in SERP snippets.
- Automated scheduling: Daily/weekly enrichment runs that need to execute without human oversight.
The hybrid approach
The practical setup for most teams: use Claude MCP for the initial discovery and qualification (steps 1–3), then pipe the qualified leads into a traditional enrichment step (step 4) and scoring model (step 5). This gets you the speed and flexibility of MCP for the messy, judgment-heavy early stages while keeping a deterministic pipeline for the structured late stages.
```python
import requests

def mcp_discovery_step(icp_description: str) -> list:
    """
    Use Scavio API for the find + initial qualify steps.
    In practice, this runs inside Claude Code via MCP.
    """
    queries = [
        f"{icp_description} companies",
        f"{icp_description} startups funded",
        f"{icp_description} hiring engineers",
    ]
    domains = set()
    for q in queries:
        resp = requests.post(
            "https://api.scavio.dev/api/v1/search",
            headers={"x-api-key": "YOUR_KEY"},
            json={"platform": "google", "query": q, "num_results": 20},
        )
        for item in resp.json().get("organic_results", []):
            link = item.get("link", "")
            if "://" in link:
                domain = link.split("/")[2].replace("www.", "")
                domains.add(domain)
    return list(domains)

# MCP handles discovery, traditional pipeline handles enrichment
discovered = mcp_discovery_step("HR tech SaaS series A B US")
print(f"Discovered {len(discovered)} domains for enrichment pipeline")
```

Cost comparison
- Traditional pipeline: Serper ($50/month) + Firecrawl ($20/month) + Clearbit ($99/month minimum) + engineering time to maintain integrations = $170+/month plus maintenance.
- MCP approach: Scavio ($30/month for 7,000 credits) + Claude Code usage. No additional tools for scraping or enrichment at small scale.
- Hybrid: Scavio for discovery + one enrichment API for deep data. $30–$130/month depending on enrichment provider.
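The monthly numbers above, combined with the per-company search cost from earlier, also bound how much enrichment a single Scavio subscription covers. This sketch uses only figures stated in this post; the three-searches-per-company assumption is the upper end of the 2–3 follow-up searches per company observed above:

```python
# Rough monthly comparison from the figures above. Engineering time to
# maintain the traditional stack is ignored, which understates its cost.
TRADITIONAL = 50 + 20 + 99   # Serper + Firecrawl + Clearbit minimum, $/month
MCP = 30                     # Scavio subscription, $/month

CREDITS_PER_MONTH = 7000
SEARCHES_PER_COMPANY = 3     # upper end of the 2-3 searches/company estimate

capacity = CREDITS_PER_MONTH // SEARCHES_PER_COMPANY
print(f"Traditional stack: ${TRADITIONAL}+/mo vs MCP: ${MCP}/mo")
print(f"One subscription covers roughly {capacity} enriched companies/month")
```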
Bottom line
The 5-step lead enrichment pipeline was built for a world without LLM orchestration. With Claude Code and a search MCP, the first three steps collapse into a conversation. The last two steps stay as structured code when you need determinism and scale. Start with the MCP approach, add pipeline stages only when volume demands it.