Failed to Fetch in AI Agents: Diagnosis and Fix

LLMs cannot browse the web natively. "Failed to fetch" errors call for proper search and fetch tools, not model upgrades. A root-cause diagnosis guide.

"Failed to fetch" in an AI agent almost never means the URL is down. It means the agent tried to access the web directly — and LLMs cannot browse the web. They have no HTTP client, no DNS resolver, no network stack. The fix is not a better model or a retry loop. The fix is giving the agent a proper search or fetch tool.

Why the error happens

When you tell an LLM "go to example.com and get the pricing," the model generates text that looks like it visited the URL. Some agent frameworks interpret a URL in the model output as an instruction to fetch it. The framework tries the request, gets blocked by anti-bot measures, CORS, or rate limiting, and returns "failed to fetch." The model did not fail. The tool layer did — or was never properly set up.

Common triggers (the first two are easy to reproduce outside any agent, as shown below):

  • Agent tries to fetch a JavaScript-rendered page with a simple HTTP GET
  • Target site blocks headless browsers or non-browser user agents
  • Agent framework has no fetch tool at all and the model is hallucinating web access
  • CORS policy blocks client-side requests in browser-based agents
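
You can reproduce the first two triggers without any agent in the loop: a plain HTTP GET against a JavaScript-rendered page returns a near-empty HTML shell, and bot-blocking sites answer non-browser clients with 403. A minimal check (the URL is a placeholder):

Python
import requests

# Triggers 1 and 2 in isolation: plain GET, no browser, no JavaScript engine.
resp = requests.get("https://example.com/pricing", timeout=10)
print(resp.status_code)  # bot-blocking sites often return 403 here
print(len(resp.text))    # a JS-rendered page returns a shell with little real content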

The wrong fix: upgrading the model

Switching from GPT-4o-mini to GPT-4o does not fix "failed to fetch." Neither does switching to Claude or Gemini. The model is not the problem. The model never had web access to begin with. A more capable model might generate more convincing fake results — which is worse, not better.

The right fix: add a search tool

Give the agent a tool that actually queries the web and returns structured results. The agent calls the tool with a query, gets real data back, and uses that data in its response. No browser automation. No fetching arbitrary URLs.

Python
import requests

def agent_web_search(query: str) -> dict:
    """Tool function the agent calls for web data."""
    resp = requests.post(
        "https://api.scavio.dev/api/v1/search",
        headers={"x-api-key": "YOUR_API_KEY"},
        json={
            "query": query,
            "platform": "google",
            "num_results": 5
        },
        timeout=10  # never let a tool call hang the agent
    )
    if resp.status_code != 200:
        return {"error": f"Search API returned {resp.status_code}"}

    results = resp.json().get("results", [])
    return {
        "results": [
            {
                "title": r.get("title", ""),
                "url": r.get("url", ""),
                "snippet": r.get("snippet", "")
            }
            for r in results
        ]
    }

# Register this as a tool in your agent framework
# Instead of: agent.browse("https://example.com/pricing")
# Use: agent.search("example.com pricing page 2026")

Adding a fetch tool for specific URLs

Sometimes the agent needs content from a specific URL, not search results. For that, add a dedicated fetch tool that handles the HTTP request properly — with appropriate headers, timeouts, and error handling. TinyFish offers free search and fetch capabilities. For simple fetches, a basic requests call works.

Python
def agent_fetch_url(url: str) -> dict:
    """Fetch a specific URL with proper error handling."""
    try:
        resp = requests.get(
            url,
            headers={
                "User-Agent": (
                    "Mozilla/5.0 (compatible; NewsBot/1.0)"
                )
            },
            timeout=10
        )
        if resp.status_code == 403:
            return {
                "error": "Site blocked the request. "
                "Use search instead of direct fetch.",
                "suggestion": f"Search for: {url} content"
            }
        return {
            "status": resp.status_code,
            "content": resp.text[:5000],
            "url": url
        }
    except requests.exceptions.Timeout:
        return {"error": "Request timed out after 10 seconds"}
    except requests.exceptions.ConnectionError:
        return {"error": "Could not connect to the URL"}

The tool registration pattern

In most agent frameworks (LangChain, CrewAI, AutoGen), you register tools with a name, description, and function. The description matters: it tells the model when to use the tool.

Python
# Generic tool registration pattern
tools = [
    {
        "name": "web_search",
        "description": (
            "Search the web for current information. "
            "Use this instead of trying to browse URLs. "
            "Returns titles, URLs, and snippets."
        ),
        "function": agent_web_search
    },
    {
        "name": "fetch_url",
        "description": (
            "Fetch content from a specific URL. "
            "Only use when you need a specific page. "
            "For general queries, use web_search instead."
        ),
        "function": agent_fetch_url
    }
]
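
As a concrete instance, here is a sketch using LangChain's @tool decorator from langchain_core.tools, which turns a plain function into a tool and uses the docstring as the description the model reads (assuming a recent langchain_core; other frameworks have equivalent registration calls):

Python
from langchain_core.tools import tool

@tool
def web_search(query: str) -> dict:
    """Search the web for current information. Use this instead of
    trying to browse URLs. Returns titles, URLs, and snippets."""
    return agent_web_search(query)

@tool
def fetch_url(url: str) -> dict:
    """Fetch content from a specific URL. Only use when you need a
    specific page. For general queries, use web_search instead."""
    return agent_fetch_url(url)

# Bind the tools to a chat model that supports tool calling, e.g.:
# llm_with_tools = llm.bind_tools([web_search, fetch_url])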

Debugging checklist

If your agent still shows "failed to fetch" after adding tools:

  1. Check if the model is actually calling the tool or still trying to fetch directly: log every tool call (see the wrapper after this list).
  2. Verify the tool description explicitly says "use this instead of browsing."
  3. Check your API key is valid and has remaining credits.
  4. Test the search/fetch function outside the agent to isolate the issue, as in the snippet below.
  5. If the target site blocks even your fetch tool, search for the content instead of fetching the URL directly.
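
Items 1 and 4 can share one harness: wrap each tool function in a small logging decorator and call it directly, before wiring it into the agent. The logged wrapper below is illustrative, not part of any framework:

Python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.tools")

def logged(tool_fn):
    """Wrap a tool function so every call and result shows up in the logs."""
    def wrapper(*args, **kwargs):
        log.info("tool call: %s args=%s kwargs=%s", tool_fn.__name__, args, kwargs)
        result = tool_fn(*args, **kwargs)
        log.info("tool result: %s", json.dumps(result)[:500])
        return result
    return wrapper

# Standalone test, outside the agent: if this fails, the agent was never the problem.
if __name__ == "__main__":
    search = logged(agent_web_search)
    print(search("example.com pricing page 2026"))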

Cost of fixing this properly

  • Scavio: $0.005/search, 250 free searches/mo
  • SerpAPI: $25+/mo
  • TinyFish: free tier for search and fetch

The cost of not fixing it: an agent that cannot access web data, which defeats the purpose of building the agent in the first place.