Tutorial

How to Migrate a Web Scraper to a Search API

Learn how to replace a BeautifulSoup or Playwright web scraper with a structured search API, eliminating proxy costs and HTML parsing maintenance.

Web scrapers that parse Google, Reddit, or Amazon HTML are the most brittle part of any data pipeline. When the target site changes its layout, your scraper breaks. When the site detects your traffic, you get blocked. When you scale up, proxy costs spike. A structured search API returns the same data as clean JSON, with no parsing, no proxies, and no maintenance. This tutorial shows how to replace a typical scraper with Scavio's API, step by step.

Prerequisites

  • Python 3.8+ installed
  • An existing scraper you want to migrate (BeautifulSoup, Playwright, or Selenium)
  • A Scavio API key from scavio.dev
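Every snippet below reads the key from an environment variable. Export it once before starting (the variable name `SCAVIO_API_KEY` matches the code in this tutorial):

```shell
# Add to your shell profile or a .env file; the key itself comes from scavio.dev
export SCAVIO_API_KEY="your-key-here"
```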

Walkthrough

Step 1: Audit your scraper's data output

Identify what fields your scraper currently extracts. Most Google scrapers extract: title, URL, snippet, position.

Python
# Typical scraper output:
# [
#   {'title': '...', 'url': '...', 'snippet': '...', 'position': 1},
#   {'title': '...', 'url': '...', 'snippet': '...', 'position': 2},
# ]
#
# Scavio's 'organic' array returns the same fields:
# [
#   {'title': '...', 'link': '...', 'snippet': '...', 'position': 1},
# ]
# Only difference: 'url' -> 'link'
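A quick way to confirm the audit as a sketch: diff your scraper's field names against the API's. The sets below restate the mapping above; adjust them to whatever your scraper actually emits.

```python
# Fields the old scraper emits vs. fields in Scavio's 'organic' results
scraper_fields = {'title', 'url', 'snippet', 'position'}
api_fields = {'title', 'link', 'snippet', 'position'}

# Fields that need renaming during migration:
print(scraper_fields - api_fields)  # -> {'url'}
print(api_fields - scraper_fields)  # -> {'link'}
```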

Step 2: Replace the scraping function

Replace your scraping code with a single API call.

Python
import requests, os

# BEFORE: 150 lines of scraping code
# from bs4 import BeautifulSoup
# import random
# PROXIES = [...]
# def scrape_google(query):
#     proxy = random.choice(PROXIES)
#     resp = requests.get(f'https://www.google.com/search?q={query}',
#         proxies={'https': proxy}, headers={'User-Agent': ...})
#     soup = BeautifulSoup(resp.text, 'html.parser')
#     results = []
#     for div in soup.select('div.g'):
#         ... # 100 lines of parsing

# AFTER: ~10 lines
def search_google(query: str) -> list:
    resp = requests.post(
        'https://api.scavio.dev/api/v1/search',
        headers={'x-api-key': os.environ['SCAVIO_API_KEY']},
        json={'platform': 'google', 'query': query},
        timeout=10,
    )
    resp.raise_for_status()  # fail loudly on HTTP errors instead of parsing an error body
    return [
        {'title': r['title'], 'url': r['link'], 'snippet': r['snippet'],
         'position': r.get('position', i + 1)}
        for i, r in enumerate(resp.json().get('organic', []))
    ]
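Transient network errors can still occur, so you may want a small retry wrapper around the call. This is a generic sketch; the helper name `with_retry` and the backoff values are not part of the Scavio API.

```python
import time

def with_retry(fn, retries=3, backoff=0.5):
    """Call fn(); on failure, retry with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts: surface the error
            time.sleep(backoff * 2 ** attempt)

# usage: results = with_retry(lambda: search_google('python tutorials'))
```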

Step 3: Update field references downstream

If your code references scraper-specific field names, update them.

Bash
# Find all references to the old scraper output format:
# grep -r 'scrape_google\|from scraper\|import scraper' .

# Common field mapping:
# Old scraper  -> Scavio API
# result.url   -> result.link
# result.desc  -> result.snippet
# result.rank  -> result.position
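If rewriting every downstream reference at once is impractical, a thin adapter can translate API results back into the old field names while you migrate. The mapping mirrors the table above; `to_legacy` is a hypothetical helper name.

```python
# API field name -> old scraper field name (from the mapping above)
FIELD_MAP = {'link': 'url', 'snippet': 'desc', 'position': 'rank'}

def to_legacy(result: dict) -> dict:
    """Rename Scavio API fields to the old scraper's names."""
    return {FIELD_MAP.get(k, k): v for k, v in result.items()}
```

Once downstream code is updated, dropping the adapter is a one-line change.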

Step 4: Remove proxy and parser dependencies

Clean up your requirements file and remove scraping infrastructure.

Bash
# Remove from requirements.txt:
# beautifulsoup4
# lxml
# playwright
# selenium
# webdriver-manager
# fake-useragent
# rotating-proxies

# Remove proxy configuration files
# Cancel proxy subscription (saves $50-200/month)

# Your requirements.txt now just needs:
# requests

Python Example

Python
# Migration summary:
# Before: 150 lines + proxy subscription + maintenance
# After: 10 lines + $0.003/query + zero maintenance

import requests, os

H = {'x-api-key': os.environ['SCAVIO_API_KEY']}

def search(query, platform='google'):
    resp = requests.post('https://api.scavio.dev/api/v1/search',
        headers=H, json={'platform': platform, 'query': query},
        timeout=10)
    resp.raise_for_status()
    return resp.json().get('organic', [])
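Since each query now costs money rather than proxy bandwidth, an in-process cache avoids paying twice for repeated queries. A minimal sketch, assuming your search function takes `(query, platform)` and returns a list; `cached` is a hypothetical helper name.

```python
import functools

def cached(search_fn, maxsize=256):
    """Wrap a search function with an in-memory LRU cache."""
    @functools.lru_cache(maxsize=maxsize)
    def wrapper(query, platform='google'):
        # Return a tuple so the cached value is immutable
        return tuple(search_fn(query, platform))
    return wrapper

# usage: search = cached(search)
```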

JavaScript Example

JavaScript
// Before: Playwright + proxy rotation + HTML parsing
// After:
async function search(query, platform = 'google') {
  const resp = await fetch('https://api.scavio.dev/api/v1/search', {
    method: 'POST',
    headers: {'x-api-key': process.env.SCAVIO_API_KEY, 'Content-Type': 'application/json'},
    body: JSON.stringify({platform, query})
  });
  if (!resp.ok) throw new Error(`Scavio API error: ${resp.status}`);
  return (await resp.json()).organic || [];
}

Expected Output

A clean search function replacing hundreds of lines of scraping code: no proxies, no parsing, no maintenance.
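For reference, a typical response body looks like the following (field values are illustrative; only the organic array is shown):

```json
{
  "organic": [
    {
      "title": "Example result",
      "link": "https://example.com/",
      "snippet": "A short description of the page...",
      "position": 1
    }
  ]
}
```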


Frequently Asked Questions

How long does this tutorial take?

Most developers complete it in 15 to 30 minutes. You will need a Scavio API key (the free tier works) and a working Python or JavaScript environment.

What are the prerequisites?

Python 3.8+ installed, an existing scraper you want to migrate (BeautifulSoup, Playwright, or Selenium), and a Scavio API key from scavio.dev, which includes 500 free credits per month.

Can I complete this tutorial on the free tier?

Yes. The free tier includes 500 credits per month, which is more than enough to complete this tutorial and prototype a working solution.

Does Scavio work with frameworks like LangChain?

Scavio has a native LangChain package (langchain-scavio), an MCP server, and a plain REST API that works with any HTTP client. This tutorial uses the raw REST API, but you can adapt it to your framework of choice.
