How to Benchmark SERP API Uptime and Reliability

Build a simple uptime checker for multiple SERP APIs. Track response times, error rates, and reliability over time.

One r/Perplexity user reported that their Sonar API credits were wiped; an r/ComplexWebScraping user praised HasData because it never drops requests. Anecdotes cut both ways, so how do you know which API is actually reliable? Build your own uptime benchmark and measure it.

Prerequisites

  • API keys for 2-3 SERP APIs to compare
  • Python 3.8+
  • SQLite or PostgreSQL for logging

Walkthrough

Step 1: Define the benchmark queries

Use the same queries across all APIs for fair comparison.

Python
benchmark_queries = [
    'best crm software 2026',
    'python web framework comparison',
    'machine learning deployment',
    'competitor analysis tools',
    'api pricing comparison',
]

Step 2: Build the test harness

Call each API with timing and error tracking.

Python
import requests, time, sqlite3

def test_api(name, url, headers, payload):
    """Send one request and record its status, latency, and success."""
    start = time.time()
    try:
        r = requests.post(url, headers=headers, json=payload, timeout=30)
        return {'api': name, 'status': r.status_code,
                'latency': time.time() - start, 'success': r.status_code == 200}
    except requests.RequestException as e:
        # Timeouts and connection errors count as downtime, not crashes.
        return {'api': name, 'status': 0, 'latency': time.time() - start,
                'success': False, 'error': str(e)}

Step 3: Log results to database

Store each test result with timestamp.

Python
conn = sqlite3.connect('api_uptime.db')
conn.execute('CREATE TABLE IF NOT EXISTS results (ts TEXT, api TEXT, query TEXT, status INT, latency REAL, success INT)')

def log_result(result, query):
    # SQLite string literals take single quotes; datetime('now') stamps the row in UTC.
    conn.execute("INSERT INTO results VALUES (datetime('now'), ?, ?, ?, ?, ?)",
        (result['api'], query, result['status'], result['latency'], int(result['success'])))
    conn.commit()

Step 4: Run on a cron schedule

Test every 15 minutes for a week to get meaningful data.

Bash
# crontab: */15 * * * * python benchmark.py
# After 1 week: 672 data points per API
# Compare: uptime %, P50/P95 latency, error patterns

Step 5: Generate a comparison report

SQL queries to compare uptime and latency.

SQL
SELECT api,
  COUNT(*) AS total,
  SUM(success) * 100.0 / COUNT(*) AS uptime_pct,
  AVG(latency) AS avg_latency,
  MAX(latency) AS p100_latency
FROM results
GROUP BY api;
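The SQL above covers averages and the worst case, but SQLite has no built-in percentile function, so the P50/P95 figures mentioned in step 4 are easier to compute in Python. A sketch using the standard library (the function name here is ours, not from any library):

```python
import sqlite3
import statistics

def latency_percentiles(conn):
    """P50/P95 latency per API, computed from the logged results."""
    report = {}
    for (api,) in conn.execute('SELECT DISTINCT api FROM results'):
        lats = [row[0] for row in
                conn.execute('SELECT latency FROM results WHERE api = ?', (api,))]
        if len(lats) >= 2:  # statistics.quantiles needs at least two points
            cuts = statistics.quantiles(lats, n=100)  # 99 percentile cut points
            report[api] = {'n': len(lats), 'p50': cuts[49], 'p95': cuts[94]}
    return report

if __name__ == '__main__':
    print(latency_percentiles(sqlite3.connect('api_uptime.db')))
```

P95 is usually the more honest number for comparing providers: two APIs with the same average can differ wildly in tail latency.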

Python Example

Python
# Scavio probe for the harness; add one such function per API you test.
import requests, time, os
H = {'x-api-key': os.environ['SCAVIO_API_KEY']}

def bench_scavio(query):
    start = time.time()
    r = requests.post('https://api.scavio.dev/api/v1/search', headers=H,
        json={'platform': 'google', 'query': query}, timeout=30)
    return {'latency': time.time() - start, 'status': r.status_code}

JavaScript Example

JavaScript
const start = Date.now();
const resp = await fetch('https://api.scavio.dev/api/v1/search', {
  method: 'POST',
  headers: {'x-api-key': process.env.SCAVIO_API_KEY, 'Content-Type': 'application/json'},
  body: JSON.stringify({platform: 'google', query: 'test query'}),
  signal: AbortSignal.timeout(30000)  // abort hung requests, matching the Python timeout
});
console.log(`Latency: ${Date.now() - start}ms`);

Expected Output

A SQLite database (api_uptime.db) with uptime and latency rows for each SERP API, and a weekly report showing uptime %, P50/P95 latency, and error patterns.

Frequently Asked Questions

How long does this tutorial take?

Most developers complete this tutorial in 15 to 30 minutes. You will need a Scavio API key (free tier works) and a working Python or JavaScript environment.

What do I need before starting?

API keys for 2-3 SERP APIs to compare, Python 3.8+, and SQLite or PostgreSQL for logging. A Scavio API key gives you 500 free credits per month.

Can I complete this on the free tier?

Yes. The free tier includes 500 credits per month, which is more than enough to complete this tutorial and prototype a working solution.

Does Scavio integrate with my framework?

Scavio has a native LangChain package (langchain-scavio), an MCP server, and a plain REST API that works with any HTTP client. This tutorial uses the raw REST API, but you can adapt it to your framework of choice.
