r/AIRankingStrategy confirms the 2026 pattern: AI engine citations lag SEO rankings by weeks to months. A page ranking in Google today will show up in ChatGPT, Claude, or Perplexity citations later. This tutorial builds the tracker that measures the lag.
Prerequisites
- Python 3.10+
- A Scavio API key
- Postgres or DuckDB for time-series storage
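Before the walkthrough, a minimal setup sketch. The two package names match what the code below imports; `your-key-here` is a placeholder for your own Scavio key:

```shell
# Install the two client-side dependencies used below
pip install requests duckdb

# Export the API key the scripts read from the environment
export SCAVIO_API_KEY="your-key-here"
```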
Walkthrough
Step 1: Pull SERP rankings daily
Standard Google SERP snapshot per tracked keyword.
```python
import requests, os, datetime

API_KEY = os.environ['SCAVIO_API_KEY']

def serp_snapshot(keyword):
    """Return the top organic result URLs for one keyword."""
    r = requests.post('https://api.scavio.dev/api/v1/search',
                      headers={'x-api-key': API_KEY},
                      json={'query': keyword, 'num_results': 10})
    r.raise_for_status()
    return [x['link'] for x in r.json().get('organic_results', [])]
```
Step 2: Pull AI Overviews citations
Google AI Overviews cite sources directly.
```python
def ai_overview_citations(keyword):
    """Return the sources the AI Overview cites for one keyword."""
    r = requests.post('https://api.scavio.dev/api/v1/search',
                      headers={'x-api-key': API_KEY},
                      json={'query': keyword, 'include_ai_overview': True})
    r.raise_for_status()
    ao = r.json().get('ai_overview', {})
    return ao.get('citations', [])
```
Step 3: Store both into a time-series
One row per (keyword, url, date, surface).
```python
import duckdb

db = duckdb.connect('citations.duckdb')
db.execute('CREATE TABLE IF NOT EXISTS citations(keyword TEXT, url TEXT, date DATE, surface TEXT)')

def record(keyword):
    """Append today's SERP URLs and AI Overview citations for one keyword."""
    today = datetime.date.today()
    for u in serp_snapshot(keyword):
        db.execute('INSERT INTO citations VALUES (?, ?, ?, ?)', (keyword, u, today, 'serp'))
    # Citations are stored as-is; adjust if the API returns objects rather than URL strings.
    for c in ai_overview_citations(keyword):
        db.execute('INSERT INTO citations VALUES (?, ?, ?, ?)', (keyword, c, today, 'ai_overview'))
```
Step 4: Compute the citation lag
For each URL, measure days between first SERP rank and first AI citation.
```python
def lag_days(keyword):
    """Per URL: first date it ranked in the SERP and first date it was cited."""
    return db.execute('''
        SELECT url,
               MIN(CASE WHEN surface='serp' THEN date END) AS first_serp,
               MIN(CASE WHEN surface='ai_overview' THEN date END) AS first_ai
        FROM citations WHERE keyword = ?
        GROUP BY url
    ''', (keyword,)).fetchall()
```
Step 5: Visualize the lag distribution
Expect a long tail: most URLs lag 30 to 90 days.
```python
# Plot with matplotlib or pipe into Superset
import statistics

def mean_lag(keyword):
    """Mean days between first SERP rank and first AI citation, over URLs with both."""
    rows = lag_days(keyword)
    diffs = [(r[2] - r[1]).days for r in rows if r[1] and r[2]]
    return statistics.mean(diffs) if diffs else None
```
Python Example
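The mean alone hides the shape of the distribution. Before wiring up matplotlib or Superset, a quick text histogram over the per-URL lag values is enough to see the long tail. `lag_histogram` is a throwaway helper, not part of the tutorial's API, and the sample values are made up:

```python
from collections import Counter

def lag_histogram(lags, width=30):
    """Bucket lag values (in days) into fixed-width bins and render a text histogram."""
    bins = Counter((d // width) * width for d in lags)
    lines = []
    for start in sorted(bins):
        lines.append(f'{start:>4}-{start + width - 1:<4}d | ' + '#' * bins[start])
    return '\n'.join(lines)

# Illustrative lag values in days (not real data):
print(lag_histogram([12, 35, 41, 62, 70, 71, 88, 150]))
```

Feed it the `diffs` list from `mean_lag` once you have a few weeks of data.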
```python
import os, requests

API_KEY = os.environ['SCAVIO_API_KEY']

def track(keyword):
    r = requests.post('https://api.scavio.dev/api/v1/search',
                      headers={'x-api-key': API_KEY},
                      json={'query': keyword, 'include_ai_overview': True})
    r.raise_for_status()
    data = r.json()
    return {'serp': data.get('organic_results', []), 'ao': data.get('ai_overview', {})}

print(track('best ai web scraping tools 2026'))
```
JavaScript Example
```javascript
const API_KEY = process.env.SCAVIO_API_KEY;

export async function track(keyword) {
  const r = await fetch('https://api.scavio.dev/api/v1/search', {
    method: 'POST',
    headers: { 'x-api-key': API_KEY, 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: keyword, include_ai_overview: true })
  });
  if (!r.ok) throw new Error(`Scavio request failed: ${r.status}`);
  const d = await r.json();
  return { serp: d.organic_results || [], ao: d.ai_overview || {} };
}
```
Expected Output
A time-series of SERP rankings versus AI citations per keyword, with the mean citation lag visible and queryable per URL.