Enforce Runtime Policies on LangChain Tools

The Problem

LangChain agents can call tools with unexpected inputs, exhaust API credits without limits, or make dangerous system calls. Current governance happens at the prompt level (which the LLM can ignore) rather than at the runtime level.

The Scavio Solution

Wrap each LangChain tool with a deterministic policy layer that blocks calls before execution. No LLM calls in the enforcement path -- pure code checks. Policies: max calls per session, input validation (no PII patterns), output validation (minimum result count), credit budgets, and blocklists.

Before

Before enforcement, a production agent exhausted its SerpAPI credits in one session by looping search calls. A separate incident: the agent passed a customer email address as a search query, leaking PII to the search provider. Both were caught days later in log review.

After

After adding the ShadowAudit-style wrapper, the agent is capped at 10 search calls per session. PII patterns (email, phone, SSN) are blocked from search queries. Credit budget is tracked per session. The agent attempts 2.3 blocked calls per 100 sessions, all caught before execution.

Who It Is For

LangChain developers deploying agents to production who need deterministic runtime governance over tool calls, credit budgets, and data safety.

Key Benefits

  • Deterministic enforcement -- no LLM in the governance path
  • Block dangerous calls before they execute
  • Track tool usage per session for cost control
  • PII detection prevents data leakage to search providers
  • Works as a wrapper on any LangChain tool

Python Example

import re
from typing import Callable

class ToolPolicy:
    def __init__(self, max_calls: int = 10, block_pii: bool = True):
        self.max_calls = max_calls
        self.block_pii = block_pii
        self.call_count = 0
        self.pii_patterns = [
            re.compile(r'[\w.-]+@[\w.-]+\.\w+'),  # email
            re.compile(r'\b\d{3}[-.]?\d{3}[-.]?\d{4}\b'),  # phone
            re.compile(r'\b\d{3}-\d{2}-\d{4}\b'),  # SSN
        ]

    def enforce(self, query: str) -> str | None:
        self.call_count += 1
        if self.call_count > self.max_calls:
            return f'Blocked: max {self.max_calls} calls exceeded'
        if self.block_pii:
            for pattern in self.pii_patterns:
                if pattern.search(query):
                    return 'Blocked: PII detected in query'
        return None

def wrap_tool(fn: Callable, policy: ToolPolicy) -> Callable:
    def wrapped(query: str) -> str:
        block = policy.enforce(query)
        if block:
            return block
        return fn(query)
    return wrapped

# Usage:
policy = ToolPolicy(max_calls=10, block_pii=True)
# search_tool = wrap_tool(original_search, policy)
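The wrapper above checks inputs before execution; the output-validation policy (minimum result count) runs after the tool returns. A minimal sketch, assuming the wrapped function returns a list of result strings; the names (wrap_with_output_check, fake_search, min_results) are illustrative:

```python
from typing import Callable

def wrap_with_output_check(fn: Callable, min_results: int = 1) -> Callable:
    """Block a tool's output if it returns fewer results than required."""
    def wrapped(query: str) -> str:
        results = fn(query)  # assumed to return a list of result strings
        if len(results) < min_results:
            return f'Blocked: fewer than {min_results} results returned'
        return '\n'.join(results)
    return wrapped

# Usage with a stub search function:
def fake_search(query: str) -> list[str]:
    return [] if query == 'empty' else ['result one', 'result two']

checked_search = wrap_with_output_check(fake_search, min_results=2)
```

Input and output checks compose: wrap the tool with the output check first, then pass the result through wrap_tool so the ToolPolicy input checks run before execution.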

JavaScript Example

class ToolPolicy {
  constructor(maxCalls = 10) {
    this.maxCalls = maxCalls;
    this.callCount = 0;
    this.piiPatterns = [
      /[\w.-]+@[\w.-]+\.\w+/,           // email
      /\b\d{3}[-.]?\d{3}[-.]?\d{4}\b/,  // phone
      /\b\d{3}-\d{2}-\d{4}\b/,          // SSN
    ];
  }
  enforce(query) {
    this.callCount++;
    if (this.callCount > this.maxCalls) return `Blocked: max ${this.maxCalls} calls exceeded`;
    if (this.piiPatterns.some(p => p.test(query))) return 'Blocked: PII detected';
    return null;
  }
}

function wrapTool(fn, policy) {
  return async (query) => {
    const block = policy.enforce(query);
    if (block) return block;
    return fn(query);
  };
}

const policy = new ToolPolicy(10);
// const safeTool = wrapTool(originalSearch, policy);

Platforms Used

Google

Web search with knowledge graph, People Also Ask (PAA), and AI overviews

Frequently Asked Questions

What problem does this solve?

LangChain agents can call tools with unexpected inputs, exhaust API credits without limits, or make dangerous system calls. Current governance happens at the prompt level (which the LLM can ignore) rather than at the runtime level.

How does the Scavio solution work?

Wrap each LangChain tool with a deterministic policy layer that blocks calls before execution. No LLM calls in the enforcement path -- pure code checks. Policies: max calls per session, input validation (no PII patterns), output validation (minimum result count), credit budgets, and blocklists.

Who is it for?

LangChain developers deploying agents to production who need deterministic runtime governance over tool calls, credit budgets, and data safety.

Can I try it for free?

Yes. Scavio's free tier includes 250 credits per month with no credit card required. That is enough to validate this solution in your workflow.
