
Multi-Agent Memory Wiki Pattern (2026)

Wikis beat vector stores for agent memory: entity-centric facts with verification timestamps. Periodic re-search via Scavio detects stale facts and auto-patches entries.


Multiple Reddit threads this week converged on the same problem: AI agents forget everything between sessions. The proposed solutions ranged from vector stores to SQLite to in-memory caches. The pattern that keeps emerging, though, is a wiki: structured entries with entity names, facts, and verification timestamps.

Why wikis beat vector stores for agent memory

Vector stores are great for semantic search over unstructured documents. Agent memory is not unstructured — it is entity-centric. An agent needs to know: "Tavily's pricing is $30/mo, last verified 2026-05-01." That is a structured fact, not a document chunk. A simple key-value store with timestamps handles this better than an embedding pipeline.

The wiki entry schema

Python
# Agent memory as structured wiki entries
memory = {
    "tavily": {
        "facts": [
            {"claim": "Basic plan is $30/mo for 4,000 credits",
             "verified": "2026-05-03", "source": "tavily.com/pricing"},
            {"claim": "Acquired by Nebius in early 2026",
             "verified": "2026-05-03", "source": "SERP snippets"},
        ]
    },
    "serpapi": {
        "facts": [
            {"claim": "Pricing starts at $75/mo for 5,000 searches",
             "verified": "2026-05-03", "source": "serpapi.com/pricing"},
            {"claim": "Google filed DMCA lawsuit Dec 2025, hearing May 19 2026",
             "verified": "2026-05-03", "source": "court filing"},
        ]
    }
}
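Reads against this schema are plain dictionary lookups, no embeddings required. Here is a minimal retrieval sketch (the `lookup` helper and its staleness flag are illustrative, not part of any library) that answers "what do I know about X, and is it still fresh?":

```python
from datetime import datetime

def lookup(memory: dict, entity: str, max_age_days: int = 7) -> list[dict]:
    """Return an entity's facts, flagging any past the freshness window."""
    facts = memory.get(entity, {}).get("facts", [])
    now = datetime.now()
    out = []
    for fact in facts:
        # Age in days since last verification
        age = (now - datetime.strptime(fact["verified"], "%Y-%m-%d")).days
        out.append({**fact, "stale": age > max_age_days})
    return out
```

The `stale` flag is what drives the re-verification pass below: anything flagged gets re-searched rather than trusted.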

Periodic re-verification

Python
import requests, os
from datetime import datetime, timedelta

key = os.environ["SCAVIO_API_KEY"]

def verify_stale_facts(memory: dict, max_age_days: int = 7) -> list[tuple]:
    """Re-search facts older than the freshness window; return
    (entity, fact, snippets) triples for an LLM or human to adjudicate."""
    cutoff = datetime.now() - timedelta(days=max_age_days)
    stale = []
    for entity, data in memory.items():
        for fact in data["facts"]:
            if datetime.strptime(fact["verified"], "%Y-%m-%d") < cutoff:
                resp = requests.post(
                    "https://api.scavio.dev/api/v1/search",
                    headers={"x-api-key": key},
                    json={"query": f"{entity} {fact['claim'][:30]}",
                          "platform": "google", "limit": 3},
                    timeout=30)
                resp.raise_for_status()
                snippets = [r["snippet"] for r in resp.json().get("results", [])]
                # LLM check: has the fact changed? For now, surface it for review.
                print(f"STALE: {entity} - {fact['claim'][:50]}...")
                stale.append((entity, fact, snippets))
    return stale
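Once the re-search confirms a fact has drifted, the entry is patched in place: replace the claim, reset the timestamp, record the new source. A minimal sketch, assuming an in-memory wiki dict as above (the `patch_fact` helper is hypothetical, not a library call):

```python
from datetime import datetime

def patch_fact(memory: dict, entity: str, old_claim: str,
               new_claim: str, source: str) -> bool:
    """Replace a stale claim and reset its verification timestamp.
    Returns False if the old claim was not found."""
    for fact in memory.get(entity, {}).get("facts", []):
        if fact["claim"] == old_claim:
            fact["claim"] = new_claim
            fact["verified"] = datetime.now().strftime("%Y-%m-%d")
            fact["source"] = source
            return True
    return False
```

Patching in place (rather than appending a new fact) keeps one claim per topic, which is what makes the merge step in the next section tractable.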

Multi-agent shared memory

When multiple agents share a wiki, concurrent writes conflict. The simplest solution: each agent writes to its own namespace, and a merge step reconciles conflicts by preferring the most recently verified fact. This is eventual consistency for agent knowledge — not perfect, but sufficient for most use cases.
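The merge step can be sketched in a few lines. This version uses the first words of a claim as a crude identity key to decide when two namespaces are talking about the same fact; a real system would want something sturdier (the helper name and key heuristic are assumptions, not a prescribed API):

```python
def merge_namespaces(*namespaces: dict) -> dict:
    """Merge per-agent wikis, preferring the most recently verified
    version of each fact. ISO dates compare correctly as strings."""
    def claim_key(fact: dict) -> str:
        # Crude identity heuristic: first three words of the claim
        return " ".join(fact["claim"].split()[:3]).lower()

    merged: dict = {}
    for ns in namespaces:
        for entity, data in ns.items():
            target = merged.setdefault(entity, {"facts": []})
            for fact in data["facts"]:
                existing = next((f for f in target["facts"]
                                 if claim_key(f) == claim_key(fact)), None)
                if existing is None:
                    target["facts"].append(dict(fact))
                elif fact["verified"] > existing["verified"]:
                    existing.update(fact)
    return merged
```

Because the reconciliation rule is "newest verification wins," the merge is commutative across agents: it does not matter which namespace is merged first.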

The honest limitation

Wiki-style memory works for entity-centric facts. It does not handle nuanced context, conversational history, or preference learning well. For those, you still need either a vector store (semantic retrieval) or a conversation log (full history). The wiki handles the "what do I know about X?" case, not the "what did the user prefer last time?" case.