Overview
Daily agent run that pulls fresh jobs from ATS subdomains, Reddit hiring threads, and LinkedIn-shaped queries; extracts full descriptions; scores against the user's resume.
Trigger
Daily 7 AM
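The source only states "Daily 7 AM" without naming a scheduler; in practice this would be a cron entry (`0 7 * * *`) or a hosted trigger. As a minimal stdlib-only sketch, the helper below (an illustrative name, not part of any stated API) computes how long to sleep until the next 7:00 local time:

```python
import datetime

def seconds_until_next_7am(now: datetime.datetime) -> float:
    """Seconds from `now` until the next 07:00 local time."""
    target = now.replace(hour=7, minute=0, second=0, microsecond=0)
    if target <= now:
        # Already past 7 AM today; schedule for tomorrow.
        target += datetime.timedelta(days=1)
    return (target - now).total_seconds()

# A long-running process would sleep for that interval, run the daily job,
# then repeat; cron or a platform scheduler is the more robust choice.
```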
Workflow Steps
1. Load user resume + preferences: skills, location, salary range, remote preference.
2. ATS-scoped SERP queries: site:greenhouse.io, site:lever.co, site:ashbyhq.com.
3. Reddit hiring thread query: r/cscareerquestions, r/jobs, niche subs.
4. Extract full JD per top candidate: Scavio extract returns markdown.
5. LLM score 0-100 vs resume: Claude or GPT, 1-line reason.
6. Daily ranked email: top 30 with scores and links.
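The Python implementation below covers the search steps but not scoring or the email. A minimal sketch of the score-and-rank steps, assuming each job dict carries `title`, `link`, `score`, and `reason` fields (illustrative names), with the LLM call left as a stub to wire to your Claude or GPT client:

```python
def score_with_llm(resume_text: str, jd_markdown: str) -> tuple:
    """Return (score 0-100, one-line reason). Stub: replace with a real LLM call
    that sends the resume and job description and parses the model's answer."""
    raise NotImplementedError

def rank_jobs(jobs: list, n: int = 30) -> list:
    # Highest score first, truncated to the top n for the daily email.
    return sorted(jobs, key=lambda j: j['score'], reverse=True)[:n]

def email_body(ranked: list) -> str:
    # One entry per job: score, title, one-line reason, then the link.
    return "\n".join(
        f"[{j['score']}] {j['title']} - {j['reason']}\n{j['link']}" for j in ranked
    )
```

`rank_jobs` and `email_body` are plain-Python helpers; only `score_with_llm` depends on an external model.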
Python Implementation
import os, requests

API_KEY = os.environ['SCAVIO_API_KEY']
H = {'x-api-key': API_KEY}

def daily(user):
    skills = ' '.join(user['skills'])
    # ATS-scoped SERP queries across the three ATS domains.
    ats = []
    for d in ['greenhouse.io', 'lever.co', 'ashbyhq.com']:
        r = requests.post('https://api.scavio.dev/api/v1/search', headers=H,
                          json={'query': f'site:{d} {skills} {user["location"]}'}).json()
        ats += r.get('organic_results', [])[:10]
    # Reddit hiring threads via the Reddit search endpoint.
    rdt = requests.post('https://api.scavio.dev/api/v1/reddit/search', headers=H,
                        json={'query': f'{skills} hiring 2026'}).json().get('posts', [])[:10]
    return ats + rdt

JavaScript Implementation
const H = { 'x-api-key': process.env.SCAVIO_API_KEY, 'Content-Type': 'application/json' };

async function daily(user) {
  const skills = user.skills.join(' ');
  const ats = [];
  // ATS-scoped SERP queries (same domains as the Python version).
  for (const d of ['greenhouse.io', 'lever.co', 'ashbyhq.com']) {
    const r = await fetch('https://api.scavio.dev/api/v1/search', {
      method: 'POST',
      headers: H,
      body: JSON.stringify({ query: `site:${d} ${skills} ${user.location}` }),
    }).then((res) => res.json());
    ats.push(...(r.organic_results || []).slice(0, 10));
  }
  return ats;
}

Platforms Used
Web search with knowledge graph, People Also Ask, and AI overviews
Reddit search: posts and threaded comments from any subreddit