n8n · automation · reviews

n8n Review Summarization with Search API Context

Add competitive search context to n8n review-summarization pipelines with Claude and a search API.

7 min

Adding a search API step before LLM summarization in n8n gives the model competitive context it cannot get from your internal data alone. A workflow that pulls customer reviews, fetches competitor reviews via search, and feeds both to Claude produces summaries that highlight your actual differentiators instead of generic sentiment analysis.

The missing piece in review summarization

Most n8n review summarization workflows follow the same pattern: pull reviews from your database or API, send them to an LLM, get a summary. The output says things like "customers love the battery life" or "shipping speed is a common complaint." This is useful but lacks context. Are competitors getting the same complaints? Is your battery life actually better than alternatives? Without competitive data, the summary is a mirror, not a map.

n8n workflow structure

The enhanced pipeline has four nodes:

1. Schedule Trigger: runs weekly.
2. HTTP Request: pulls your product reviews from your database or app.
3. HTTP Request: hits a search API for competitor reviews.
4. Claude node: summarizes both sets with comparative analysis.

The search step adds one API call per competitor keyword.

Search node configuration in n8n

In n8n, add an HTTP Request node pointed at the search API. Set the method to POST, the URL to the search endpoint, and add your API key header. The JSON body takes platform and query parameters.

JSON
{
  "method": "POST",
  "url": "https://api.scavio.dev/api/v1/search",
  "headers": {
    "x-api-key": "{{ $env.SCAVIO_API_KEY }}"
  },
  "body": {
    "platform": "google",
    "query": "{{ $json.product_name }} reviews 2026"
  }
}

Python equivalent for the full pipeline

If you prefer Python over n8n's visual editor, the same pipeline runs as a script. This is also useful for testing the logic before building it in n8n.

Python
import os

import requests
from anthropic import Anthropic

SEARCH_API = 'https://api.scavio.dev/api/v1/search'
SEARCH_H = {'x-api-key': os.environ['SCAVIO_API_KEY']}
claude = Anthropic()

def get_competitor_context(product_name: str, competitors: list[str]):
    """Fetch top review snippets for each competitor via the search API."""
    context = []
    for comp in competitors:
        resp = requests.post(SEARCH_API, headers=SEARCH_H, json={
            'platform': 'google',
            'query': f'{comp} reviews 2026',
        }, timeout=15)
        resp.raise_for_status()
        # Keep only the top 5 organic results per competitor to bound prompt size
        results = resp.json().get('organic_results', [])[:5]
        snippets = [r.get('snippet', '') for r in results]
        context.append({'competitor': comp, 'review_snippets': snippets})
    return context

def summarize_with_context(own_reviews: list[str], competitor_data: list[dict]):
    """Ask Claude for a comparative summary grounded in competitor snippets."""
    prompt = f"""Summarize these customer reviews with competitive context.

OUR REVIEWS:
{chr(10).join(own_reviews[:20])}

COMPETITOR REVIEW SNIPPETS:
{chr(10).join(f"{d['competitor']}: {' | '.join(d['review_snippets'])}" for d in competitor_data)}

Output: (1) Top 3 strengths vs competitors, (2) Top 3 weaknesses vs competitors,
(3) Suggested ad copy angles based on real differentiators."""

    resp = claude.messages.create(
        model='claude-sonnet-4-20250514',
        max_tokens=1024,
        messages=[{'role': 'user', 'content': prompt}],
    )
    return resp.content[0].text

# Example usage
own_reviews = ['Battery lasts 2 days easily', 'Sound quality is incredible']
competitors = ['Sony WH-1000XM6', 'Bose QC Ultra']
context = get_competitor_context('MyHeadphones Pro', competitors)
summary = summarize_with_context(own_reviews, context)
print(summary)

Ad copy generation with grounded data

The same pipeline extends to ad copy. Instead of asking Claude to generate ad variants from your reviews alone, give it the competitive landscape. "Our battery lasts 2 days while the top competitor averages 30 hours" is a stronger ad angle than "long battery life" because it is grounded in real search data.
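As a sketch of this extension, the summary produced earlier can feed a second prompt. The helper below is illustrative (the function name and prompt wording are assumptions, not part of the original pipeline); the resulting string goes through the same claude.messages.create() call used in summarize_with_context.

```python
def build_ad_copy_prompt(summary: str, num_variants: int = 3) -> str:
    """Turn a comparative review summary into a grounded ad-copy request."""
    return (
        f"Based on this competitive review summary:\n\n{summary}\n\n"
        f"Write {num_variants} ad copy variants. Each variant must cite a "
        "concrete differentiator from the summary (e.g. a specific battery "
        "comparison), not generic praise, and stay under 150 characters."
    )
```

Asking explicitly for concrete differentiators is what keeps the generated copy anchored to the search data instead of drifting back to generic claims.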

Cost per weekly run

A weekly pipeline tracking 5 competitors with 1 search query each costs 5 credits ($0.025). Add 5 more queries for your own product monitoring and the total is 10 queries/week, $0.05/week, $0.20/mo. The Claude API call for summarization costs roughly $0.01-0.03 depending on input length. Total pipeline cost: under $1/mo for weekly competitive review intelligence. This fits comfortably in Scavio's free tier of 250 credits/mo.
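The arithmetic above can be sanity-checked in a few lines. The rates here simply mirror the figures quoted in this section (1 credit per search, $0.005 per credit); they are not an official price list.

```python
# Cost model mirroring the figures quoted above (assumed, not an official rate card)
CREDIT_PRICE = 0.005       # $ per search credit (5 credits = $0.025)
QUERIES_PER_WEEK = 10      # 5 competitor queries + 5 own-product queries
WEEKS_PER_MONTH = 4

search_cost_week = QUERIES_PER_WEEK * CREDIT_PRICE
search_cost_month = search_cost_week * WEEKS_PER_MONTH
credits_month = QUERIES_PER_WEEK * WEEKS_PER_MONTH

print(f"${search_cost_week:.2f}/week, ${search_cost_month:.2f}/mo, "
      f"{credits_month} credits/mo")
# → $0.05/week, $0.20/mo, 40 credits/mo
```

At 40 credits/mo, the search side of the pipeline uses well under a fifth of a 250-credit free tier.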