Reddit is where your users talk about you without a marketing filter. This tutorial builds a brand-mention tracker that records every post matching your brand keywords and writes the results to a CSV you can load into a BI tool. It is a lightweight alternative to enterprise social-listening suites and costs 2 credits per run.
Prerequisites
- Python 3.8+
- requests library
- A Scavio API key
- A list of brand keywords (product names, misspellings, handles)
Walkthrough
Step 1: Define the keywords to track
Include common misspellings and product names. Each keyword is one search request.
```python
KEYWORDS = ["scavio", "scavio api", "scavio.dev"]
```

Step 2: Fetch posts for each keyword
Loop over keywords, sort by new, and collect the posts into a single list.
```python
import os, requests

KEY = os.environ["SCAVIO_API_KEY"]

def search(q):
    r = requests.post(
        "https://api.scavio.dev/api/v1/reddit/search",
        headers={"Authorization": f"Bearer {KEY}"},
        json={"query": q, "sort": "new"},
        timeout=30,
    )
    r.raise_for_status()  # fail loudly on auth, quota, or server errors
    return r.json()["data"]["posts"]

posts = []
for k in KEYWORDS:
    posts.extend(search(k))
```

Step 3: Write to CSV
One row per post with the fields you want to trend over time.
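Overlapping keywords (e.g. "scavio" and "scavio api") can return the same post more than once, so it may be worth deduplicating before writing. A standalone sketch, assuming each post dict carries a unique `id` as shown in the response fields:

```python
def dedupe(posts):
    """Keep only the first occurrence of each post id, preserving order."""
    seen = set()
    unique = []
    for p in posts:
        if p["id"] not in seen:
            seen.add(p["id"])
            unique.append(p)
    return unique

# A post matched by two keywords collapses to a single entry.
print(len(dedupe([{"id": "t3_a"}, {"id": "t3_a"}, {"id": "t3_b"}])))  # → 2
```

Running `posts = dedupe(posts)` before the CSV write keeps one row per unique post.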
```python
import csv

with open("brand_mentions.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["id", "subreddit", "author", "title", "timestamp", "url"])
    for p in posts:
        w.writerow([p["id"], p["subreddit"], p["author"], p["title"], p["timestamp"], p["url"]])
```

Step 4: Schedule it
Run the script on cron or GitHub Actions. Append to the same CSV over time to build a trend dataset.
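If you prefer GitHub Actions to cron, a scheduled workflow can run the same script hourly. This is a sketch: the file paths, Python version, and the `SCAVIO_API_KEY` secret name are assumptions, and note that hosted runners do not persist the CSV between runs, so you would need to commit it back to the repo or upload it as an artifact.

```yaml
# .github/workflows/track_reddit.yml (hypothetical layout)
name: track-reddit
on:
  schedule:
    - cron: "0 * * * *"  # hourly, same cadence as the crontab entry
jobs:
  track:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install requests
      - run: python track_reddit.py
        env:
          SCAVIO_API_KEY: ${{ secrets.SCAVIO_API_KEY }}
```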
```shell
# crontab -e
# 0 * * * * /usr/bin/python3 /path/to/track_reddit.py >> /var/log/track_reddit.log 2>&1
```

Python Example
```python
import os, csv, requests, pathlib

KEY = os.environ["SCAVIO_API_KEY"]
KEYWORDS = ["scavio", "scavio api"]
OUT = pathlib.Path("brand_mentions.csv")

def search(q):
    r = requests.post(
        "https://api.scavio.dev/api/v1/reddit/search",
        headers={"Authorization": f"Bearer {KEY}"},
        json={"query": q, "sort": "new"},
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["data"]["posts"]

def main():
    rows = []
    for k in KEYWORDS:
        for p in search(k):
            rows.append([p["id"], p["subreddit"], p["author"], p["title"], p["timestamp"], p["url"], k])
    # Append on every run so the CSV accumulates; write the header only once.
    new_file = not OUT.exists()
    with OUT.open("a", newline="") as f:
        w = csv.writer(f)
        if new_file:
            w.writerow(["id", "subreddit", "author", "title", "timestamp", "url", "matched_keyword"])
        w.writerows(rows)
    print(f"wrote {len(rows)} rows")

if __name__ == "__main__":
    main()
```

JavaScript Example
```javascript
import fs from "node:fs";

const KEY = process.env.SCAVIO_API_KEY;
const KEYWORDS = ["scavio", "scavio api"];
const OUT = "brand_mentions.csv";

async function search(q) {
  const r = await fetch("https://api.scavio.dev/api/v1/reddit/search", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query: q, sort: "new" }),
  });
  if (!r.ok) throw new Error(`search failed: ${r.status}`);
  return (await r.json()).data.posts;
}

const rows = [];
for (const k of KEYWORDS) {
  for (const p of await search(k)) {
    rows.push([p.id, p.subreddit, p.author, p.title, p.timestamp, p.url, k]);
  }
}

if (!fs.existsSync(OUT)) {
  fs.writeFileSync(OUT, "id,subreddit,author,title,timestamp,url,matched_keyword\n");
}
// JSON.stringify is a rough stand-in for CSV quoting; titles containing
// double quotes need a real CSV library such as csv-stringify.
fs.appendFileSync(OUT, rows.map((r) => r.map((v) => JSON.stringify(v)).join(",")).join("\n") + "\n");
```

Expected Output
```
wrote 12 rows
```

brand_mentions.csv:

```csv
id,subreddit,author,title,timestamp,url,matched_keyword
t3_1smxyz1,SaaS,marketer42,"Has anyone used scavio?",2026-04-16T09:12:00+0000,https://...,scavio
t3_1smxyz2,devtools,engineer7,"scavio vs serpapi review",2026-04-16T10:40:00+0000,https://...,scavio
```
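Once a few scheduled runs have accumulated, the CSV is easy to trend without a BI tool. A standard-library sketch that counts mentions per calendar day, assuming the column layout and ISO-style timestamps shown above:

```python
import csv
from collections import Counter

def mentions_per_day(path):
    """Count mention rows per calendar day using the timestamp column."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # "2026-04-16T09:12:00+0000" -> "2026-04-16"
            counts[row["timestamp"][:10]] += 1
    return counts
```

Calling `mentions_per_day("brand_mentions.csv")` returns a day-keyed counter you can print, plot, or diff between runs to spot spikes in brand chatter.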