The Most Stable Stack for a Claude-Powered Job Search Agent
Building a reliable job search agent with Claude, a thin orchestration layer, and real-time SERP data. Architecture decisions, tool selection, and deployment patterns.
Job search agents built on LLMs tend to break in the same place: the data layer. The model is fine. The orchestration logic is fine. But the moment you try to pull live job listings from Google, you hit CAPTCHAs, rate limits, and HTML that changes every week. This post covers a stable stack for a Claude-powered job search agent that uses Scavio for the data layer so you can focus on the agent logic.
The Architecture
The stack has three layers. Claude handles reasoning and user interaction. A thin orchestration layer (Python or TypeScript) manages the loop. Scavio handles every search query against Google, returning structured JSON with job listings, company info, and salary data when available.
This separation matters because it means your agent never touches raw HTML. When Google changes their layout -- and they do, often -- your agent keeps working because Scavio handles the parsing upstream.
Fetching Job Listings
A single POST to Scavio's Google search endpoint returns structured job data. Here's a function that searches for jobs by role and location:
```python
import requests

def search_jobs(role: str, location: str, api_key: str) -> dict:
    resp = requests.post(
        "https://api.scavio.dev/api/v1/search",
        headers={"x-api-key": api_key},
        json={
            "platform": "google",
            "query": f"{role} jobs in {location}",
            "type": "search",
            "mode": "full"
        },
        timeout=30
    )
    resp.raise_for_status()
    return resp.json()
```

The response includes organic results, knowledge graph data, and -- when Google surfaces them -- dedicated job listing cards with employer, title, and application links.
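Pulling the job cards out of that response can be done defensively, so a missing section never crashes the agent. This is a minimal sketch -- the field names (`job_listings`, `title`, `employer`, `apply_link`) are assumptions for illustration, so check them against the actual response shape:

```python
def extract_job_cards(response: dict) -> list[dict]:
    """Collect job listing cards from a search response.

    Field names here are hypothetical -- adjust to the real schema.
    Uses .get() throughout so absent sections yield an empty list
    rather than a KeyError.
    """
    cards = []
    for item in response.get("job_listings", []):
        cards.append({
            "title": item.get("title"),
            "employer": item.get("employer"),
            "apply_link": item.get("apply_link"),
        })
    return cards
```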
Building the Agent Loop
The agent loop is straightforward. Claude receives a user query like "find me senior backend roles in Austin under $200k." The orchestrator calls Scavio, feeds the structured results to Claude, and Claude filters, ranks, and summarizes.
```python
from anthropic import Anthropic

client = Anthropic()

def run_agent(user_query: str, api_key: str):
    # Step 1: Extract search parameters with Claude
    extraction = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=256,
        messages=[{"role": "user", "content": f"Extract role and location from: {user_query}"}]
    )
    # Step 2: Search via Scavio (parameters hardcoded here for brevity;
    # in practice, parse role and location out of the extraction response)
    results = search_jobs("senior backend engineer", "Austin TX", api_key)
    # Step 3: Claude filters and ranks
    analysis = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": f"Filter these jobs for: {user_query}\n\n{results}"
        }]
    )
    return analysis.content[0].text
```

Handling Pagination and Freshness
Job listings go stale fast. A posting from two weeks ago might already be filled. Scavio supports date-range filtering through the query itself -- append `after:2026-04-01` to your search query to limit results to recent postings. For pagination, pass the `page` parameter to fetch additional result pages.
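Rather than hardcoding a cutoff date, a small helper can compute the `after:` operator from a rolling window. A minimal sketch using only the standard library:

```python
from datetime import date, timedelta

def freshness_filter(days: int = 14) -> str:
    """Build an after: query operator covering the last `days` days."""
    cutoff = date.today() - timedelta(days=days)
    return f"after:{cutoff.isoformat()}"
```

Appending `freshness_filter()` to the query string keeps the window current without anyone editing the date by hand.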
```python
def search_recent_jobs(role: str, location: str, api_key: str, page: int = 1):
    resp = requests.post(
        "https://api.scavio.dev/api/v1/search",
        headers={"x-api-key": api_key},
        json={
            "platform": "google",
            "query": f"{role} jobs in {location} after:2026-04-01",
            "type": "search",
            "page": page
        }
    )
    return resp.json()
```

What the Agent Actually Delivers
A well-built job search agent doesn't just return a list of links. It should:
- Filter results by salary range, seniority, and remote/onsite
- Deduplicate listings that appear across multiple job boards
- Summarize company info pulled from knowledge graph data
- Track new postings over time and alert the user
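The deduplication step, for instance, can be as simple as keying listings on a normalized (employer, title) pair. A sketch, assuming each listing is a dict with `employer` and `title` fields (hypothetical names):

```python
def dedupe_listings(listings: list[dict]) -> list[dict]:
    """Drop repeat postings that differ only in casing or whitespace.

    Keeps the first occurrence of each (employer, title) pair.
    """
    seen = set()
    unique = []
    for job in listings:
        key = (
            (job.get("employer") or "").strip().lower(),
            (job.get("title") or "").strip().lower(),
        )
        if key not in seen:
            seen.add(key)
            unique.append(job)
    return unique
```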
Claude handles all of this reasoning. Scavio handles the data. The orchestrator just connects the two. That's the entire stack.
Why This Stack Holds Up
Most job search bots break within weeks because they depend on scraping. This stack avoids that entirely. Scavio returns structured JSON from a maintained API. Claude handles the intelligence layer. The orchestrator is thin enough that there's almost nothing to maintain. If you are building a job search agent, start here -- get the data layer right first, then build the agent logic on top.