GLM Web Search Tool: 12-Line Wrapper

Zhipu GLM accepts the OpenAI tool-calling shape. Scavio plugs in as a function. The whole wrapper is 12 lines.

4 min read

An r/ZaiGLM post asked: can anyone use web_search with GLM? Yes. GLM (Zhipu) accepts the OpenAI tool-calling shape, so any HTTP search API plugs in. The wrapper is a 12-line function. This is the shortest path.

The 12 lines

Python
import os, requests

def scavio_search(query: str) -> dict:
    return requests.post(
        'https://api.scavio.dev/api/v1/search',
        headers={'x-api-key': os.environ['SCAVIO_API_KEY']},
        json={'query': query},
    ).json()

# Register with GLM
tools = [{
    'type': 'function',
    'function': {
        'name': 'scavio_search',
        'description': 'Live web search returning organic results plus AI Overview citations',
        'parameters': {
            'type': 'object',
            'properties': {'query': {'type': 'string'}},
            'required': ['query'],
        },
    },
}]

The full GLM call

Python
from zhipuai import ZhipuAI
import json, os

client = ZhipuAI(api_key=os.environ['GLM_API_KEY'])
resp = client.chat.completions.create(
    model='glm-4-plus',
    messages=[{'role': 'user', 'content': 'best mcp practices 2026'}],
    tools=tools,
)

# Handle tool calls
for tc in resp.choices[0].message.tool_calls or []:
    if tc.function.name == 'scavio_search':
        args = json.loads(tc.function.arguments)
        result = scavio_search(args['query'])
        # Feed result back as tool message and re-call.

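The commented last step, feeding the result back, can be sketched as a small helper. This is illustrative, not part of either SDK: `tool_result_messages` is a hypothetical name, and it assumes the assistant turn has been converted to a plain dict first (the zhipuai SDK's pydantic message objects can typically be dumped with `model_dump()`).

```python
import json

def tool_result_messages(assistant_msg: dict, results: dict) -> list:
    """Build the messages to append before the second GLM call:
    the assistant turn carrying tool_calls, then one 'tool' message
    per call, matched by tool_call_id."""
    msgs = [assistant_msg]
    for tc in assistant_msg.get('tool_calls') or []:
        msgs.append({
            'role': 'tool',
            'tool_call_id': tc['id'],
            'content': json.dumps(results[tc['id']]),
        })
    return msgs
```

Append these to the original messages list and call client.chat.completions.create again with the same model and tools; GLM then answers using the search payload.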
Why this works on GLM

GLM-4-plus and GLM Coder both implement the OpenAI tool-calling spec. The tools schema is the same. The response shape is the same. The only difference is the SDK import. Any pattern documented for OpenAI tool calling translates directly to GLM with no architectural change.

What changes if you use OpenAI SDK with GLM base_url

The Zhipu team supports OpenAI SDK compatibility through a different base_url. If your codebase already uses the OpenAI SDK, point it at https://open.bigmodel.cn/api/paas/v4/ and the same code works against GLM. The Scavio wrapper is unchanged.
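The swap lives entirely in client construction. A sketch, assuming the openai Python SDK v1 interface; nothing else in the earlier code changes:

```python
import os
from openai import OpenAI

# Point the stock OpenAI SDK at Zhipu's OpenAI-compatible endpoint.
# The tools list, the scavio_search wrapper, and the tool-call
# handling loop all stay exactly as written above.
client = OpenAI(
    api_key=os.environ['GLM_API_KEY'],
    base_url='https://open.bigmodel.cn/api/paas/v4/',
)
# client.chat.completions.create(model='glm-4-plus', messages=..., tools=tools)
```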

Multi-surface from the same wrapper

Add a second tool for Reddit:

Python
def scavio_reddit(query: str) -> dict:
    return requests.post(
        'https://api.scavio.dev/api/v1/reddit/search',
        headers={'x-api-key': os.environ['SCAVIO_API_KEY']},
        json={'query': query},
    ).json()

tools.append({
    'type': 'function',
    'function': {
        'name': 'scavio_reddit',
        'description': 'Reddit thread search; returns posts with score and comments',
        'parameters': {
            'type': 'object',
            'properties': {'query': {'type': 'string'}},
            'required': ['query'],
        },
    },
})

The cost

Per call: $0.0043 for Scavio plus GLM tokens. A typical agent session making 20-50 search calls spends $0.09-0.22 on Scavio. GLM-4-plus token cost for the same session usually runs $0.50-2.00, so the search layer is a small fraction of the total.
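The arithmetic, spelled out (price from the figure above; `scavio_session_cost` is just an illustrative helper):

```python
SCAVIO_PER_CALL = 0.0043  # USD per Scavio search call

def scavio_session_cost(calls: int) -> float:
    """Scavio spend for a session making `calls` search calls."""
    return round(calls * SCAVIO_PER_CALL, 4)

# 20-call session: ~$0.086; 50-call session: ~$0.215.
```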

Honest constraints

Scavio's strength is global English coverage, not Chinese-market search. zh-CN-heavy workloads are better served by Bocha or Zhipu's own search integration, which have localized Chinese indexing. For multilingual or English-primary GLM agents, Scavio is fine.

What changes for MCP-aware GLM clients

Cherry Studio and AnythingLLM both support MCP servers as agent tools. Attach mcp.scavio.dev/mcp once and every GLM session in those clients has the search tool available without per-agent code. For pure SDK-driven GLM apps, the 12-line wrapper above is still the shortest path.