Tutorial

How to Add Web Search to Cursor Background Agent

Give Cursor's background agent live web search via MCP. Ground code suggestions and research tasks with current data from Google, Reddit, and docs.

Cursor's background agent can run tasks autonomously, but without web search it is limited to the context in your codebase and its training data. Adding a search MCP server gives the background agent the ability to look up current documentation, check API changes, research error messages, and verify facts in real time. This tutorial shows how to add the Scavio MCP server to Cursor so the background agent can query Google, Reddit, YouTube, and other platforms during its work. The setup takes under two minutes and requires no code changes.

Prerequisites

  • Cursor IDE installed (v0.40+ with background agent support)
  • A Scavio API key from scavio.dev
  • MCP support enabled in Cursor settings
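
Before touching Cursor, you may want to confirm your API key is actually exported in your shell. A minimal sketch (the `SCAVIO_API_KEY` variable name is this tutorial's convention, not something Cursor or MCP requires):

```python
import os

# Sanity check before configuring Cursor: confirm the key is exported.
# SCAVIO_API_KEY is this tutorial's naming convention, not a name
# required by Cursor or MCP.
key = os.environ.get("SCAVIO_API_KEY")
has_key = bool(key)
print("API key found" if has_key else "SCAVIO_API_KEY is not set")
```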

Walkthrough

Step 1: Open Cursor MCP settings

Navigate to Cursor settings to add a new MCP server configuration.

  • Open Cursor Settings > MCP Servers, or
  • edit .cursor/mcp.json in your project root, or
  • edit ~/.cursor/mcp.json for a global configuration

Step 2: Add Scavio MCP server

Add the Scavio MCP server configuration to your Cursor MCP settings.

JSON
{
  "mcpServers": {
    "scavio": {
      "url": "https://mcp.scavio.dev/mcp",
      "headers": {
        "x-api-key": "your_scavio_api_key"
      }
    }
  }
}
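
A malformed mcp.json is the most common reason the server fails to appear in Cursor. A quick sketch for checking that the file parses before you restart (the inline `config_text` mirrors the Step 2 config; swap in your real key and file path):

```python
import json

# Sketch: confirm the mcp.json content parses as valid JSON before
# restarting Cursor. config_text mirrors the Step 2 config; in practice
# you would read your actual .cursor/mcp.json file instead.
config_text = """
{
  "mcpServers": {
    "scavio": {
      "url": "https://mcp.scavio.dev/mcp",
      "headers": {"x-api-key": "your_scavio_api_key"}
    }
  }
}
"""
config = json.loads(config_text)  # raises json.JSONDecodeError on a typo
print("scavio" in config["mcpServers"])  # → True
```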

Step 3: Verify the connection

Test that Cursor recognizes the Scavio MCP server and its tools.

After saving mcp.json, restart Cursor, then check Settings > MCP Servers: scavio should show as connected. The background agent now has access to two tools:

  • search: query Google, Reddit, YouTube, Amazon, and Walmart
  • extract: fetch and parse web page content
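
The examples in this tutorial rely on only a small slice of the search response: an organic_results list whose items carry a title and a link. A sketch of that shape with illustrative sample data (not real API output):

```python
# Sample data is illustrative, not real Scavio output; the code relies
# only on the organic_results / title / link fields used elsewhere in
# this tutorial.
sample = {
    "organic_results": [
        {"title": "Model Context Protocol", "link": "https://modelcontextprotocol.io"},
        {"title": "Cursor MCP docs", "link": "https://docs.cursor.com"},
    ]
}

# Same iteration pattern as the verification code in Step 4.
for r in sample.get("organic_results", [])[:3]:
    print(f"{r['title']}: {r['link']}")
```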

Step 4: Test with a background agent task

Run a task that requires live web data to verify the integration works.

Python
import os, requests

# Equivalent API call the background agent makes:
API_KEY = os.environ["SCAVIO_API_KEY"]
resp = requests.post(
    "https://api.scavio.dev/api/v1/search",
    headers={"x-api-key": API_KEY},
    json={"platform": "google", "query": "Cursor IDE MCP setup guide 2026"},
)
resp.raise_for_status()  # fail fast on auth or quota errors
for r in resp.json().get("organic_results", [])[:3]:
    print(f"{r['title']}: {r['link']}")

Python Example

Python
import os, requests

API_KEY = os.environ["SCAVIO_API_KEY"]
resp = requests.post(
    "https://api.scavio.dev/api/v1/search",
    headers={"x-api-key": API_KEY},
    json={"platform": "google", "query": "Cursor background agent setup"},
)
resp.raise_for_status()  # fail fast on auth or quota errors
for r in resp.json().get("organic_results", [])[:5]:
    print(r["title"])

JavaScript Example

JavaScript
const headers = {"x-api-key": process.env.SCAVIO_API_KEY, "Content-Type": "application/json"};
const resp = await fetch("https://api.scavio.dev/api/v1/search", {
  method: "POST",
  headers,
  body: JSON.stringify({platform: "google", query: "Cursor background agent setup"})
});
const data = await resp.json();
(data.organic_results ?? []).slice(0, 5).forEach(r => console.log(r.title));

Expected Output

A Cursor IDE with Scavio MCP configured, giving the background agent live web search capabilities for research, documentation lookups, and fact verification.

Frequently Asked Questions

How long does this tutorial take?

The MCP setup itself takes under two minutes. Budget 15 to 30 minutes if you also run the API verification examples, which require a Scavio API key (free tier works) and a working Python or JavaScript environment.

What are the prerequisites?

Cursor IDE (v0.40+ with background agent support), a Scavio API key from scavio.dev, and MCP support enabled in Cursor settings. A Scavio API key gives you 250 free credits per month.

Can I complete this tutorial on the free tier?

Yes. The free tier includes 250 credits per month, which is more than enough to complete this tutorial and prototype a working solution.

Does Scavio only work through MCP?

No. Scavio has a native LangChain package (langchain-scavio), an MCP server, and a plain REST API that works with any HTTP client. This tutorial uses the MCP server for Cursor and the raw REST API in the verification examples, but you can adapt to your framework of choice.
