
n8n + OpenClaw: Track LLM Token Costs in Workflows

Track combined search API and LLM token costs in n8n workflows with OpenClaw integration.


Tracking the combined cost of LLM tokens and search API calls in a single n8n workflow lets you see the true per-task cost of an AI agent. MachinaOS (the OpenClaw + n8n mashup) converts token counts into work and dollar figures, but most setups miss the search API spend, which often rivals or exceeds the LLM cost.

Why Token Cost Alone Is Misleading

A research agent that runs 5 search queries per task at $0.005 each spends $0.025 on search. If the LLM processes 4,000 tokens of search results at GPT-4o rates ($2.50/1M input), that is $0.01 in LLM cost. The search API cost is 2.5x the LLM cost in this example. Most n8n workflows only track the LLM side and report misleadingly low per-task costs.
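The arithmetic above is worth sanity-checking in code, since it drives the rest of the article. A minimal sketch using the example's numbers (5 queries at $0.005, 4,000 input tokens at GPT-4o rates):

```python
# Per-task cost breakdown for the example above
SEARCHES_PER_TASK = 5
SEARCH_COST_PER_QUERY = 0.005                 # example search API pricing
LLM_INPUT_TOKENS = 4_000
GPT4O_INPUT_COST_PER_TOKEN = 2.5 / 1_000_000  # $2.50 per 1M input tokens

search_cost = SEARCHES_PER_TASK * SEARCH_COST_PER_QUERY    # $0.025
llm_cost = LLM_INPUT_TOKENS * GPT4O_INPUT_COST_PER_TOKEN   # $0.01
print(f"search ${search_cost:.3f} vs LLM ${llm_cost:.3f} "
      f"({search_cost / llm_cost:.1f}x)")
```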

Structured vs Raw Results: Token Impact

The format of search results directly affects LLM token consumption. A structured JSON response (title + snippet + URL) averages 300 tokens per search. A full-page markdown scrape averages 2,000-5,000 tokens. Over 10 searches in a workflow, that is 3,000 vs 30,000 tokens -- a $0.0075 vs $0.075 difference at GPT-4o input rates. Choosing a search API that returns structured results is itself a cost optimization.
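The structured-vs-raw comparison can be expressed directly, using ~3,000 tokens as a representative value inside the 2,000-5,000 range quoted above:

```python
# Token and cost comparison: structured JSON results vs full-page scrapes
SEARCHES = 10
INPUT_COST_PER_TOKEN = 2.5 / 1_000_000  # GPT-4o input rate

tokens_per_result = {"structured": 300, "raw_scrape": 3_000}
costs = {
    label: tokens * SEARCHES * INPUT_COST_PER_TOKEN
    for label, tokens in tokens_per_result.items()
}
for label, cost in costs.items():
    print(f"{label}: ${cost:.4f} per 10-search workflow")
```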

n8n Workflow Structure with Cost Tracking

Python
# n8n Function node: wrap search API call with cost tracking
# Place this in a Function node after your HTTP Request node

SEARCH_COST_PER_QUERY = 0.005  # Scavio pricing
LLM_INPUT_COST_PER_TOKEN = 2.5 / 1_000_000  # GPT-4o
LLM_OUTPUT_COST_PER_TOKEN = 10.0 / 1_000_000  # GPT-4o

def track_costs(search_results, llm_response):
    # Estimate tokens in search results (rough heuristic: ~4 chars per token)
    search_tokens = len(str(search_results)) // 4
    # Read the exact output token count from the LLM usage metadata
    output_tokens = llm_response.get("usage", {}).get("completion_tokens", 0)

    costs = {
        "search_api_cost": SEARCH_COST_PER_QUERY,
        "search_tokens_fed_to_llm": search_tokens,
        "llm_input_cost": search_tokens * LLM_INPUT_COST_PER_TOKEN,
        "llm_output_tokens": output_tokens,
        "llm_output_cost": output_tokens * LLM_OUTPUT_COST_PER_TOKEN,
    }
    costs["total_cost"] = (
        costs["search_api_cost"]
        + costs["llm_input_cost"]
        + costs["llm_output_cost"]
    )
    return costs

n8n Workflow Layout

A cost-tracked research workflow in n8n has five nodes:

  • Trigger (webhook or schedule)
  • HTTP Request node: calls Scavio search API
  • Function node: extracts structured results and logs search cost
  • OpenAI node: processes results, returns answer + usage metadata
  • Function node: calculates combined cost and appends to tracking sheet
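The final node's combined-cost calculation can be sketched as standalone Python (the constant names and pricing mirror the earlier Function node example; `combine_costs` is an illustrative helper, not an n8n built-in):

```python
# Standalone sketch of what the final cost-tracking node computes
SEARCH_COST_PER_QUERY = 0.005                   # example search API pricing
LLM_INPUT_COST_PER_TOKEN = 2.5 / 1_000_000      # GPT-4o input
LLM_OUTPUT_COST_PER_TOKEN = 10.0 / 1_000_000    # GPT-4o output

def combine_costs(num_searches, search_tokens, output_tokens):
    search_cost = num_searches * SEARCH_COST_PER_QUERY
    input_cost = search_tokens * LLM_INPUT_COST_PER_TOKEN
    output_cost = output_tokens * LLM_OUTPUT_COST_PER_TOKEN
    return {
        "search": round(search_cost, 4),
        "llm_input": round(input_cost, 4),
        "llm_output": round(output_cost, 4),
        "total": round(search_cost + input_cost + output_cost, 4),
    }

print(combine_costs(num_searches=5, search_tokens=2_000, output_tokens=800))
```

In a real workflow this dictionary would be attached to the item passed to the next node, so the per-task breakdown travels with the data.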

Logging Costs to a Spreadsheet

Python
import csv
from datetime import datetime

def log_task_cost(task_name, search_queries, llm_input_tokens,
                  llm_output_tokens):
    search_cost = search_queries * 0.005
    llm_input_cost = llm_input_tokens * 2.5 / 1_000_000
    llm_output_cost = llm_output_tokens * 10.0 / 1_000_000
    total = search_cost + llm_input_cost + llm_output_cost

    row = [
        datetime.now().isoformat(),
        task_name,
        search_queries,
        f"{search_cost:.4f}",
        llm_input_tokens,
        f"{llm_input_cost:.4f}",
        llm_output_tokens,
        f"{llm_output_cost:.4f}",
        f"{total:.4f}",
    ]

    with open("agent_costs.csv", "a", newline="") as f:
        csv.writer(f).writerow(row)
    return total

# Example: research task with 3 searches
cost = log_task_cost(
    task_name="competitor_analysis",
    search_queries=3,
    llm_input_tokens=2400,
    llm_output_tokens=800,
)
print(f"Task cost: {cost:.4f} USD")

Real Numbers from a Production Workflow

A daily competitor monitoring workflow that tracks 10 keywords:

  • Search API: 10 queries x $0.005 = $0.05/run
  • LLM input: ~3,000 tokens of structured results x $2.50/1M = $0.0075
  • LLM output: ~1,500 tokens summary x $10/1M = $0.015
  • Total per run: $0.0725. At one run per day: ~$2.18/month

Without cost tracking, teams estimate this workflow at "$0.02/run" because they only count LLM tokens. The search API accounts for 69% of the actual cost ($0.05 of $0.0725). Knowing this changes how you budget and how aggressively you optimize search query count versus result quality.
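As a final sanity check, the search API's share of the per-run total works out as follows:

```python
# Share of the per-run total attributable to the search API
search = 10 * 0.005        # $0.05 for 10 queries
llm = 0.0075 + 0.015       # LLM input + output cost
total = search + llm       # $0.0725
share = search / total
print(f"search share of total: {share:.0%}")
```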