Overview
This workflow searches Reddit for demand signals (people asking for tools, complaining about existing solutions, or describing unmet needs) and filters by recency: only threads from the last 7 days are included, so you see current demand rather than stale conversations. Results are scored by engagement (upvotes plus double-weighted comments) to prioritize high-signal threads.
Trigger
Manual or scheduled (daily/weekly)
Schedule
Daily or weekly
Workflow Steps
Define demand queries
Create a list of pain-point queries like 'need tool for X', 'looking for alternative to Y', 'anyone built Z'. Each query targets a specific problem domain.
Search Reddit via Scavio
For each query, call Scavio Reddit search. The API returns structured threads with titles, scores, comment counts, timestamps, and URLs.
Filter by recency
Discard threads older than 7 days. This ensures you see current demand signals, not archived discussions.
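The recency check can be sketched as a small helper. Assumptions: the field holding the thread's creation time is an ISO-8601 string (the actual Scavio response schema may name or format it differently), and comparison is done naively in UTC.

```python
from datetime import datetime, timedelta

RECENCY_DAYS = 7

def is_fresh(posted_iso: str, now: datetime) -> bool:
    """Return True if the thread was posted within the last RECENCY_DAYS."""
    cutoff = now - timedelta(days=RECENCY_DAYS)
    # Normalize a trailing "Z" so fromisoformat accepts it, then drop tzinfo
    # for a naive UTC comparison.
    posted = datetime.fromisoformat(posted_iso.replace("Z", "+00:00")).replace(tzinfo=None)
    return posted >= cutoff

now = datetime(2025, 1, 10)
print(is_fresh("2025-01-08T12:00:00Z", now))  # True: 2 days old
print(is_fresh("2024-12-20T12:00:00Z", now))  # False: 21 days old
```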
Score by engagement
Sort remaining threads by a combined score: upvotes + (2 * comments). Threads with active discussion indicate stronger demand than upvote-only posts.
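The scoring formula in isolation, with illustrative sample threads (the field names `score` and `comments` match the API fields used in the implementations below):

```python
def engagement_score(thread: dict) -> int:
    """Combined score: upvotes plus double-weighted comments."""
    return thread.get("score", 0) + 2 * thread.get("comments", 0)

threads = [
    {"title": "Need an invoicing tool", "score": 40, "comments": 35},   # 40 + 70 = 110
    {"title": "Ahrefs alternative?",    "score": 90, "comments": 5},    # 90 + 10 = 100
]
ranked = sorted(threads, key=engagement_score, reverse=True)
print([t["title"] for t in ranked])  # ['Need an invoicing tool', 'Ahrefs alternative?']
```

Note how the comment weighting ranks the discussion-heavy thread above the higher-upvote one.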
Output ranked demand signals
Write the top 20 threads to a report file or database. Each entry includes the pain-point query, thread title, engagement score, and URL.
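A minimal sketch of the output step, writing one JSON object per line. The JSON-lines format and the file name `demand_report.jsonl` are illustrative choices, not part of the workflow spec:

```python
import json

def write_report(signals: list, path: str = "demand_report.jsonl", top_n: int = 20) -> int:
    """Write the top-N signals as JSON lines; return the number written."""
    top = signals[:top_n]
    with open(path, "w", encoding="utf-8") as f:
        for s in top:
            f.write(json.dumps(s) + "\n")
    return len(top)

signals = [{"query": "need tool for invoice automation",
            "title": "Need an invoicing tool", "score": 110, "url": ""}]
print(write_report(signals))  # 1
```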
Python Implementation
import os
import requests
from datetime import datetime, timedelta

SCAVIO_KEY = os.environ["SCAVIO_API_KEY"]
H = {"x-api-key": SCAVIO_KEY}

QUERIES = [
    "need tool for invoice automation",
    "looking for alternative to Ahrefs",
    "anyone built a lead scoring system",
]
RECENCY_DAYS = 7

def search_demand(query: str) -> list:
    resp = requests.post(
        "https://api.scavio.dev/api/v1/search",
        headers=H,
        json={"platform": "reddit", "query": query, "sort": "new"},
        timeout=10,
    )
    resp.raise_for_status()
    threads = resp.json().get("organic", [])
    cutoff = datetime.utcnow() - timedelta(days=RECENCY_DAYS)
    fresh = []
    for t in threads:
        # Discard threads older than the recency window. The "date" field is
        # assumed to be an ISO-8601 timestamp; adjust to the actual schema.
        posted = t.get("date")
        if posted:
            ts = datetime.fromisoformat(posted.replace("Z", "+00:00")).replace(tzinfo=None)
            if ts < cutoff:
                continue
        score = t.get("score", 0) + 2 * t.get("comments", 0)
        fresh.append({"title": t["title"], "score": score,
                      "url": t.get("link", ""), "query": query})
    return sorted(fresh, key=lambda x: x["score"], reverse=True)

all_signals = []
for q in QUERIES:
    all_signals.extend(search_demand(q))

all_signals.sort(key=lambda x: x["score"], reverse=True)
for s in all_signals[:20]:
    print(f"[{s['score']}] {s['title']} ({s['query']})")
JavaScript Implementation
const QUERIES = [
  "need tool for invoice automation",
  "looking for alternative to Ahrefs",
  "anyone built a lead scoring system",
];
const RECENCY_DAYS = 7;

async function searchDemand(query) {
  const resp = await fetch("https://api.scavio.dev/api/v1/search", {
    method: "POST",
    headers: { "x-api-key": process.env.SCAVIO_API_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({ platform: "reddit", query, sort: "new" })
  });
  if (!resp.ok) throw new Error(`Scavio search failed: ${resp.status}`);
  const threads = (await resp.json()).organic || [];
  const cutoff = Date.now() - RECENCY_DAYS * 24 * 60 * 60 * 1000;
  return threads
    // Discard threads older than the recency window. The "date" field is
    // assumed to be an ISO-8601 timestamp; adjust to the actual schema.
    .filter(t => !t.date || new Date(t.date).getTime() >= cutoff)
    .map(t => ({
      title: t.title,
      score: (t.score || 0) + 2 * (t.comments || 0),
      url: t.link || "",
      query
    }))
    .sort((a, b) => b.score - a.score);
}

const allSignals = [];
for (const q of QUERIES) {
  allSignals.push(...await searchDemand(q)); // top-level await requires an ES module
}

allSignals.sort((a, b) => b.score - a.score);
for (const s of allSignals.slice(0, 20)) {
  console.log(`[${s.score}] ${s.title} (${s.query})`);
}

Platforms Used
Reddit: community posts and threaded comments from any subreddit