Overview
This workflow monitors competitor brand keywords on the Google SERP every day. It tracks organic position changes, detects new competitor content (blog posts, landing pages, press releases), and counts Reddit mentions to gauge community buzz. The digest email arrives before your morning standup with actionable insights.
Trigger
Cron schedule (daily at 6:30 AM UTC; cron expression 30 6 * * *)
Schedule
Daily at 6:30 AM UTC
Workflow Steps
Load competitor brand keywords
Read competitor names and branded keywords from config. Include variations like 'CompetitorName review', 'CompetitorName pricing', 'CompetitorName vs'.
Search Google for each keyword
Call the Scavio Google search endpoint for each branded keyword. Record the top 10 organic results with their positions, titles, and URLs.
Diff against yesterday
Compare today's top 10 against yesterday's stored snapshot. Identify new URLs (fresh content), position changes, and results that dropped out of the top 10.
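The new-URL check appears in the implementations below; position changes and dropped results come out of the same snapshots. A minimal sketch of that fuller diff in Python, assuming each stored result is a dict with pos and url keys as in the code further down:

def diff_serp(old: list, new: list) -> list:
    """Describe what changed between two top-10 snapshots."""
    old_by_url = {r["url"]: r["pos"] for r in old}
    new_by_url = {r["url"]: r["pos"] for r in new}
    notes = []
    for url, pos in new_by_url.items():
        if url not in old_by_url:
            notes.append(f"NEW at #{pos}: {url}")  # fresh content
        elif old_by_url[url] != pos:
            notes.append(f"MOVED #{old_by_url[url]} -> #{pos}: {url}")
    for url, pos in old_by_url.items():
        if url not in new_by_url:
            notes.append(f"DROPPED (was #{pos}): {url}")  # fell out of the top 10
    return notes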
Check Reddit mentions
Search Reddit for each competitor name via Scavio. Count threads from the last 24 hours to gauge buzz.
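A sketch of the mention count, assuming the same Scavio /api/v1/search endpoint accepts platform "reddit" and returns matching threads under a posts key with a created_utc timestamp; those field names are guesses, so check the actual Reddit response shape:

import os
import time
import requests

def reddit_mentions_last_24h(name: str) -> int:
    # Assumption: the Reddit platform mirrors the Google search call below;
    # "posts" and "created_utc" are hypothetical field names.
    resp = requests.post(
        "https://api.scavio.dev/api/v1/search",
        headers={"x-api-key": os.environ["SCAVIO_API_KEY"]},
        json={"platform": "reddit", "query": name},
        timeout=10,
    )
    resp.raise_for_status()
    cutoff = time.time() - 24 * 3600
    return sum(1 for p in resp.json().get("posts", [])
               if p.get("created_utc", 0) >= cutoff)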
Email digest
Format all changes into a concise email digest grouped by competitor. Highlight new content and significant position changes.
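A minimal sketch of the delivery step using Python's standard smtplib; the SMTP host, credentials, and addresses are placeholders to swap for your own:

import os
import smtplib
from email.message import EmailMessage

def send_digest(changes_by_competitor: dict) -> None:
    # Build one section per competitor so the digest scans quickly.
    body = []
    for comp, lines in changes_by_competitor.items():
        body.append(comp)
        body.extend(lines or ["No changes detected."])
        body.append("")
    msg = EmailMessage()
    msg["Subject"] = "Daily competitor SERP digest"
    msg["From"] = "alerts@example.com"   # placeholder sender
    msg["To"] = "team@example.com"       # placeholder recipient
    msg.set_content("\n".join(body))
    with smtplib.SMTP("smtp.example.com", 587) as smtp:  # placeholder host
        smtp.starttls()
        smtp.login("alerts@example.com", os.environ["SMTP_PASSWORD"])
        smtp.send_message(msg)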
Python Implementation
import json
import os
from pathlib import Path

import requests

SCAVIO_KEY = os.environ["SCAVIO_API_KEY"]
H = {"x-api-key": SCAVIO_KEY}

# Branded keywords to track, grouped by competitor.
COMPETITORS = {
    "Tavily": ["tavily pricing", "tavily review", "tavily api"],
    "Serper": ["serper pricing", "serper review", "serper api"],
}

def search_keyword(keyword: str) -> list:
    """Fetch the top 10 organic Google results for a keyword via Scavio."""
    resp = requests.post(
        "https://api.scavio.dev/api/v1/search",
        headers=H,
        json={"platform": "google", "query": keyword},
        timeout=10,
    )
    resp.raise_for_status()
    return [{"pos": r["position"], "title": r["title"], "url": r["link"]}
            for r in resp.json().get("organic", [])[:10]]

# Load yesterday's snapshot (empty on the first run).
baseline_path = Path("competitor_baseline.json")
baseline = json.loads(baseline_path.read_text()) if baseline_path.exists() else {}

today = {}
changes = []
for comp, keywords in COMPETITORS.items():
    today[comp] = {}
    for kw in keywords:
        results = search_keyword(kw)
        today[comp][kw] = results
        # Any URL absent from yesterday's top 10 counts as new content.
        old = baseline.get(comp, {}).get(kw, [])
        old_urls = {r["url"] for r in old}
        new_urls = {r["url"] for r in results}
        new_content = new_urls - old_urls
        if new_content:
            changes.append(f"{comp} [{kw}]: {len(new_content)} new URLs")

# Persist today's snapshot as tomorrow's baseline.
baseline_path.write_text(json.dumps(today))

for c in changes:
    print(c)
JavaScript Implementation
// Run as an ES module (e.g. node monitor.mjs) so top-level await works.
import { readFileSync, writeFileSync, existsSync } from "node:fs";

// Branded keywords to track, grouped by competitor.
const COMPETITORS = {
  Tavily: ["tavily pricing", "tavily review", "tavily api"],
  Serper: ["serper pricing", "serper review", "serper api"],
};

// Fetch the top 10 organic Google results for a keyword via Scavio.
async function searchKeyword(keyword) {
  const resp = await fetch("https://api.scavio.dev/api/v1/search", {
    method: "POST",
    headers: { "x-api-key": process.env.SCAVIO_API_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({ platform: "google", query: keyword }),
  });
  if (!resp.ok) throw new Error(`Scavio search failed: ${resp.status}`);
  return ((await resp.json()).organic || []).slice(0, 10).map((r) => ({
    pos: r.position, title: r.title, url: r.link,
  }));
}

// Load yesterday's snapshot (empty on the first run).
const baselinePath = "competitor_baseline.json";
const baseline = existsSync(baselinePath) ? JSON.parse(readFileSync(baselinePath, "utf8")) : {};

const today = {};
const changes = [];
for (const [comp, keywords] of Object.entries(COMPETITORS)) {
  today[comp] = {};
  for (const kw of keywords) {
    const results = await searchKeyword(kw);
    today[comp][kw] = results;
    // Any URL absent from yesterday's top 10 counts as new content.
    const oldUrls = new Set((baseline[comp]?.[kw] || []).map((r) => r.url));
    const newUrls = results.filter((r) => !oldUrls.has(r.url));
    if (newUrls.length > 0) changes.push(`${comp} [${kw}]: ${newUrls.length} new URLs`);
  }
}

// Persist today's snapshot as tomorrow's baseline.
writeFileSync(baselinePath, JSON.stringify(today, null, 2));
changes.forEach((c) => console.log(c));
Platforms Used
Google: web search with knowledge graph, People Also Ask (PAA), and AI overviews
Reddit: community posts & threaded comments from any subreddit