Tutorial

How to Analyze Reddit Sentiment with an LLM

Analyze sentiment across a Reddit thread with Python and an LLM. Fetch the post and all comments with Scavio, then classify each reply.

Sentiment analysis across a Reddit thread tells you at a glance whether a community is enthusiastic, skeptical, or split. With a single API call you can pull the post plus all comments, and with a short LLM prompt you can classify each reply and aggregate the results. This tutorial uses Scavio for the fetch and Anthropic's Claude for the classification, but the structure works for any provider.

Prerequisites

  • Python 3.10+
  • The anthropic and requests Python libraries
  • A Scavio API key and an Anthropic API key
  • A Reddit thread URL to analyze
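
The examples below read both keys from environment variables. A quick fail-fast check (a convenience sketch, not part of either SDK) saves a confusing traceback later:

```python
import os

def check_keys(names=("SCAVIO_API_KEY", "ANTHROPIC_API_KEY")):
    """Return the subset of required environment variables that are unset."""
    return [v for v in names if not os.environ.get(v)]

missing = check_keys()
if missing:
    print("Set these environment variables first:", ", ".join(missing))
```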

Walkthrough

Step 1: Fetch the thread

Pull the post plus all comments in one call.

Python
import os, requests

resp = requests.post(
    "https://api.scavio.dev/api/v1/reddit/post",
    headers={"Authorization": f"Bearer {os.environ['SCAVIO_API_KEY']}"},
    json={"url": "https://www.reddit.com/r/Python/comments/1smb9du/"},
    timeout=30,
)
resp.raise_for_status()
comments = resp.json()["data"]["comments"]
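
The later steps rely on each comment object carrying at least body and depth fields. Real threads also contain deleted or empty comments, so a small filter (a hypothetical helper, sketched here) avoids sending blank text to the model:

```python
# Assumed comment shape, based on the fields this tutorial uses:
# {"body": "...", "depth": 0, ...}
def usable(comment: dict) -> bool:
    """Keep only comments we can classify: a non-empty body and a depth field."""
    return bool(comment.get("body", "").strip()) and "depth" in comment

sample = [
    {"body": "Love it", "depth": 0},
    {"body": "", "depth": 0},      # deleted or empty comment
    {"body": "reply", "depth": 1},
]
print([c["body"] for c in sample if usable(c)])  # ['Love it', 'reply']
```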

Step 2: Classify each comment

Send each comment body to Claude and ask for a single label: positive, neutral, or negative.

Python
from anthropic import Anthropic

client = Anthropic()

def classify(text: str) -> str:
    msg = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=10,
        messages=[{"role": "user", "content": f"One word: positive, neutral, or negative?\n\n{text}"}],
    )
    return msg.content[0].text.strip().lower()
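
Even with a tightly worded prompt, a model can wrap the label in extra words or punctuation. A defensive wrapper (a sketch, not part of the Anthropic SDK) keeps the aggregation step clean by coercing anything unrecognized to neutral:

```python
def normalize(raw: str) -> str:
    """Map a raw model reply onto one of the three labels, defaulting to neutral."""
    reply = raw.strip().lower()
    # Check negative before neutral so "not positive"-style replies don't misfire;
    # substring match tolerates trailing punctuation or surrounding words.
    for label in ("positive", "negative", "neutral"):
        if label in reply:
            return label
    return "neutral"
```

With this in place, Step 3 can call normalize(classify(...)) instead of trusting the raw reply.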

Step 3: Aggregate and report

Run classify over top-level comments, count labels, and print a summary.

Python
from collections import Counter

top_level = [c for c in comments if c["depth"] == 0][:25]
labels = [classify(c["body"]) for c in top_level]
print(Counter(labels))
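
Raw counts are hard to compare across threads of different sizes; percentage shares normalize for thread length. An illustrative helper:

```python
from collections import Counter

def shares(labels: list[str]) -> dict[str, float]:
    """Convert a list of labels into percentage shares of all classified comments."""
    counts = Counter(labels)
    total = sum(counts.values()) or 1  # avoid division by zero on an empty thread
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

print(shares(["positive", "positive", "neutral", "negative"]))
# {'positive': 50.0, 'neutral': 25.0, 'negative': 25.0}
```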

Python Example

Python
import os, requests
from collections import Counter
from anthropic import Anthropic

SCAVIO_KEY = os.environ["SCAVIO_API_KEY"]
client = Anthropic()

def fetch_thread(url: str):
    r = requests.post(
        "https://api.scavio.dev/api/v1/reddit/post",
        headers={"Authorization": f"Bearer {SCAVIO_KEY}"},
        json={"url": url},
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["data"]

def classify(text: str) -> str:
    msg = client.messages.create(
        model="claude-opus-4-6",
        max_tokens=10,
        messages=[{"role": "user", "content": f"One word: positive, neutral, or negative?\n\n{text}"}],
    )
    return msg.content[0].text.strip().lower()

data = fetch_thread("https://www.reddit.com/r/Python/comments/1smb9du/")
top = [c for c in data["comments"] if c["depth"] == 0][:25]
labels = [classify(c["body"]) for c in top]
print("Sentiment on", data["post"]["title"])
print(Counter(labels))
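
The full example classifies 25 comments one at a time, so most of the runtime is spent waiting on sequential API calls. If your rate limits allow it, a thread pool can overlap those requests. This is a sketch that assumes classify is safe to call from multiple threads, which holds for straightforward per-call use of the client:

```python
from concurrent.futures import ThreadPoolExecutor

def classify_all(texts, classify, workers=5):
    """Classify comment bodies concurrently; results keep the input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(classify, texts))
```

Then replace the list comprehension with labels = classify_all([c["body"] for c in top], classify).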

JavaScript Example

JavaScript
import Anthropic from "@anthropic-ai/sdk";
const client = new Anthropic();
const SCAVIO_KEY = process.env.SCAVIO_API_KEY;

async function fetchThread(url) {
  const r = await fetch("https://api.scavio.dev/api/v1/reddit/post", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${SCAVIO_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ url }),
  });
  if (!r.ok) throw new Error(`Scavio request failed: ${r.status}`);
  return (await r.json()).data;
}

async function classify(text) {
  const msg = await client.messages.create({
    model: "claude-opus-4-6",
    max_tokens: 10,
    messages: [{ role: "user", content: `One word: positive, neutral, or negative?\n\n${text}` }],
  });
  return msg.content[0].text.trim().toLowerCase();
}

const data = await fetchThread("https://www.reddit.com/r/Python/comments/1smb9du/");
const top = data.comments.filter((c) => c.depth === 0).slice(0, 25);
const labels = await Promise.all(top.map((c) => classify(c.body)));
const counts = {};
for (const l of labels) counts[l] = (counts[l] ?? 0) + 1;
console.log(counts);

Expected Output

Text
Sentiment on FastAPI vs Django in 2026 -- what the teams are actually using
Counter({'positive': 14, 'neutral': 8, 'negative': 3})

Frequently Asked Questions

How long does this tutorial take?

Most developers complete this tutorial in 15 to 30 minutes. You will need a Scavio API key (free tier works) and a working Python or JavaScript environment.

What do I need before starting?

Python 3.10+, the anthropic and requests libraries, a Scavio API key, an Anthropic API key, and a Reddit thread URL to analyze. A Scavio API key gives you 500 free credits per month.

Can I complete this on the free tier?

Yes. The free tier includes 500 credits per month, which is more than enough to complete this tutorial and prototype a working solution.

Does Scavio work with LLM frameworks?

Scavio has a native LangChain package (langchain-scavio), an MCP server, and a plain REST API that works with any HTTP client. This tutorial uses the raw REST API, but you can adapt it to your framework of choice.

Start Building

Grab a Scavio API key, drop the full example into a script, and point it at any thread your community cares about.