Tutorial

How to Monitor Legal Case Updates with a Search API

Track court filings, legal case updates, and regulatory changes using automated Google and Reddit search. Build a legal monitoring agent.

Legal cases (like Google vs SerpAPI or Anthropic vs Reddit) generate filings, news articles, and Reddit discussions over months, and tracking them manually is tedious. This tutorial automates case monitoring: daily Google searches surface new filings, while Reddit searches capture community analysis.

Prerequisites

  • Python 3.8+
  • A Scavio API key
  • Case names or parties to monitor

Walkthrough

Step 1: Define cases to monitor

Set up a list of legal cases with search queries for each.

Python
CASES = [
    {
        'name': 'Google vs SerpAPI (DMCA)',
        'google_queries': ['serpapi google dmca lawsuit 2026', 'serpapi motion dismiss 2026'],
        'reddit_queries': ['serpapi lawsuit', 'serpapi google dmca'],
    },
    {
        'name': 'Anthropic vs Reddit (Data)',
        'google_queries': ['anthropic reddit lawsuit 2026', 'anthropic reddit data scraping case'],
        'reddit_queries': ['anthropic reddit lawsuit', 'anthropic data case'],
    },
]
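
If you track many parties, a small helper can generate entries in the same shape as the CASES list. This helper is not part of the Scavio API, just a convenience sketch; the query templates are a starting point you should tune per case.

```python
from typing import List

def make_case(name: str, parties: List[str], year: str = '2026') -> dict:
    """Build a case entry matching the CASES structure above."""
    base = ' '.join(p.lower() for p in parties)
    return {
        'name': name,
        'google_queries': [f'{base} lawsuit {year}', f'{base} motion filing {year}'],
        'reddit_queries': [f'{base} lawsuit', f'{base} case'],
    }

case = make_case('Google vs SerpAPI (DMCA)', ['SerpAPI', 'Google'])
# case['google_queries'][0] == 'serpapi google lawsuit 2026'
```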

Step 2: Build the monitoring function

Search Google and Reddit for each case and track new results.

Python
import requests, os, json
from datetime import date
from pathlib import Path

H = {'x-api-key': os.environ['SCAVIO_API_KEY'], 'Content-Type': 'application/json'}

def monitor_case(case: dict) -> dict:
    """Search Google and Reddit for a case and collect the top results."""
    updates = {'name': case['name'], 'date': date.today().isoformat(), 'google': [], 'reddit': []}
    
    for query in case['google_queries']:
        resp = requests.post('https://api.scavio.dev/api/v1/search', headers=H,
            json={'platform': 'google', 'query': query}, timeout=10)
        resp.raise_for_status()  # fail fast on auth or quota errors
        for r in resp.json().get('organic', [])[:3]:
            updates['google'].append({'title': r.get('title',''), 'url': r.get('link',''), 'snippet': r.get('snippet','')})
    
    for query in case['reddit_queries']:
        resp = requests.post('https://api.scavio.dev/api/v1/search', headers=H,
            json={'platform': 'reddit', 'query': query}, timeout=10)
        resp.raise_for_status()
        for r in resp.json().get('organic', [])[:3]:
            updates['reddit'].append({'title': r.get('title',''), 'url': r.get('link',''), 'score': r.get('score',0)})
    
    return updates
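
monitor_case returns one dict per case. A small formatter (a hypothetical helper, not required by the rest of the tutorial) can turn that dict into a printable digest for logs or email:

```python
def format_updates(updates: dict) -> str:
    """Render one case's results as a plain-text digest."""
    lines = [f"{updates['name']} ({updates['date']})"]
    for source in ('google', 'reddit'):
        for r in updates[source]:
            lines.append(f"  [{source}] {r['title']} - {r['url']}")
    return '\n'.join(lines)
```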

Step 3: Detect new results

Compare today's results against previously seen URLs to surface only new filings.

Python
def check_for_new(case: dict, history_file: str = 'legal_history.json') -> dict:
    history = json.loads(Path(history_file).read_text()) if Path(history_file).exists() else {}
    case_key = case['name']
    seen_urls = set(history.get(case_key, []))
    
    updates = monitor_case(case)
    all_urls = [r['url'] for r in updates['google'] + updates['reddit']]
    new_urls = [u for u in all_urls if u and u not in seen_urls]
    
    # Update history
    history[case_key] = list(seen_urls | set(all_urls))
    Path(history_file).write_text(json.dumps(history))
    
    new_results = [r for r in updates['google'] + updates['reddit'] if r.get('url') in new_urls]
    return {'case': case['name'], 'new_count': len(new_results), 'new': new_results}
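
The new-vs-seen comparison inside check_for_new is just set arithmetic. Isolated as a pure function (a sketch with sample URLs, not live data), it is easy to test on its own:

```python
def diff_urls(seen, found):
    """Return found URLs not in seen, preserving order and skipping empties."""
    seen_set = set(seen)
    return [u for u in found if u and u not in seen_set]

new = diff_urls(['https://a.example/1'],
                ['https://a.example/1', 'https://a.example/2', ''])
# new == ['https://a.example/2']
```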

Step 4: Generate daily report

Run all case monitors and produce a summary.

Python
def daily_legal_report() -> dict:
    report = {'date': date.today().isoformat(), 'cases': []}
    for case in CASES:
        result = check_for_new(case)
        if result['new_count'] > 0:
            report['cases'].append(result)
    report['total_new'] = sum(c['new_count'] for c in report['cases'])
    return report

r = daily_legal_report()
print(f"Legal update {r['date']}: {r['total_new']} new items")
for case in r['cases']:
    print(f"  {case['case']}: {case['new_count']} new")
    for item in case['new'][:2]:
        print(f"    - {item.get('title','')}")
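
To actually run this daily, schedule the script with cron or compute the delay yourself. One stdlib-only sketch (the 09:00 run time is an arbitrary choice, not something the API requires):

```python
from datetime import datetime, timedelta

def seconds_until(hour, minute=0, now=None):
    """Seconds from `now` until the next occurrence of hour:minute."""
    now = now or datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)  # already passed today; run tomorrow
    return (target - now).total_seconds()

# e.g. in a loop: time.sleep(seconds_until(9)); daily_legal_report()
```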

Python Example

Python
import requests, os
H = {'x-api-key': os.environ['SCAVIO_API_KEY'], 'Content-Type': 'application/json'}

def legal_search(case_query):
    g = requests.post('https://api.scavio.dev/api/v1/search', headers=H,
        json={'platform': 'google', 'query': case_query}, timeout=10).json()
    r = requests.post('https://api.scavio.dev/api/v1/search', headers=H,
        json={'platform': 'reddit', 'query': case_query}, timeout=10).json()
    return {'google': g.get('organic',[])[:3], 'reddit': r.get('organic',[])[:3]}

JavaScript Example

JavaScript
async function legalSearch(caseQuery) {
  const H = {'x-api-key': process.env.SCAVIO_API_KEY, 'Content-Type': 'application/json'};
  const [g, r] = await Promise.all([
    fetch('https://api.scavio.dev/api/v1/search', {method:'POST', headers:H, body:JSON.stringify({platform:'google', query:caseQuery})}).then(r=>r.json()),
    fetch('https://api.scavio.dev/api/v1/search', {method:'POST', headers:H, body:JSON.stringify({platform:'reddit', query:caseQuery})}).then(r=>r.json())
  ]);
  return {google: (g.organic||[]).slice(0,3), reddit: (r.organic||[]).slice(0,3)};
}

Expected Output

An automated legal case monitor that searches daily for new filings, news articles, and Reddit discussions about tracked cases.
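
A run of daily_legal_report might produce a report shaped like the following (illustrative values only):

```json
{
  "date": "2026-02-14",
  "cases": [
    {
      "case": "Google vs SerpAPI (DMCA)",
      "new_count": 2,
      "new": [
        {"title": "Motion to dismiss filed", "url": "https://example.com/filing", "snippet": "..."}
      ]
    }
  ],
  "total_new": 2
}
```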

Frequently Asked Questions

How long does this tutorial take?

Most developers complete this tutorial in 15 to 30 minutes. You will need a Scavio API key (free tier works) and a working Python or JavaScript environment.

What do I need before starting?

Python 3.8+, a Scavio API key, and the case names or parties you want to monitor. A Scavio API key gives you 500 free credits per month.

Can I complete this tutorial on the free tier?

Yes. The free tier includes 500 credits per month, which is more than enough to complete this tutorial and prototype a working solution.

Does Scavio integrate with agent frameworks?

Scavio has a native LangChain package (langchain-scavio), an MCP server, and a plain REST API that works with any HTTP client. This tutorial uses the raw REST API, but you can adapt it to your framework of choice.
