Workflow

Daily Search Grounding for Ollama Assistant

A workflow that feeds daily search results into a local Ollama LLM, grounding its knowledge with current data and reducing hallucinations on time-sensitive topics.

Overview

Local LLMs running on Ollama know nothing after their training cutoff, so they tend to hallucinate when asked about anything more recent. This workflow runs daily searches on topics you care about, feeds the results into your local model's context, and stores grounded answers in a local knowledge file. Your assistant stays current without sending private queries to cloud APIs.

Trigger

Daily at 8 AM via cron or task scheduler.

Schedule

Daily 8 AM
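
On Linux or macOS, the daily 8 AM schedule can be wired up with a standard crontab entry. The script and log paths below are placeholders, not part of the workflow spec:

```shell
# Run the grounding script every day at 08:00 local time
0 8 * * * /usr/bin/python3 /path/to/ground_topics.py >> /path/to/ground.log 2>&1
```

On Windows, Task Scheduler with a daily trigger achieves the same thing.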

Workflow Steps

1. Load Topic List

Read the list of topics you want your local LLM to stay current on from a YAML or JSON config.
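A minimal sketch of this step, assuming a JSON config: the `topics.json` filename and the `{"topics": [...]}` shape are illustrative choices, not part of the workflow spec.

```python
import json
from pathlib import Path

def load_topics(config_path: str = "topics.json") -> list[str]:
    """Read the topic list from a JSON config file.

    Assumed config shape (for illustration):
    {"topics": ["latest python 3.14 features", ...]}
    """
    data = json.loads(Path(config_path).read_text())
    topics = data.get("topics", [])
    # Fail loudly on an empty or malformed config rather than
    # silently running zero searches.
    if not topics:
        raise ValueError(f"No topics found in {config_path}")
    return topics
```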

2. Search Each Topic via Scavio

Run a Scavio search for each topic. Extract top 5 organic results with titles and snippets.
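Because this step makes one network call per topic every day, transient failures are worth handling. A simple retry-with-backoff wrapper, sketched here as a generic helper (the name `with_retries` is an illustrative choice), can wrap the search call from the implementation below:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Return a wrapper that retries fn with exponential backoff.

    Retries on any exception; re-raises after the final attempt.
    """
    def wrapper(*args, **kwargs):
        for attempt in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == attempts - 1:
                    raise
                # 1s, 2s, 4s, ... between attempts (scaled by base_delay)
                time.sleep(base_delay * (2 ** attempt))
    return wrapper
```

Usage would look like `search = with_retries(search_topic)` followed by `search(topic)`.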

3. Format as Context Document

Combine search results into a markdown document with source URLs and retrieval timestamps.
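The formatting step might look like the sketch below, which renders one topic's results as a markdown block with a retrieval timestamp. The exact layout is an assumption; the implementations further down use a flatter one-line-per-result format:

```python
from datetime import datetime, timezone

def format_context(topic: str, results: list[dict]) -> str:
    """Render search results as a markdown context block with sources."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    lines = [f"## {topic}", f"Retrieved: {ts}", ""]
    for r in results:
        # Each result keeps its title, snippet, and source URL so the
        # model can cite where a claim came from.
        lines.append(f"- **{r.get('title', '')}**: {r.get('snippet', '')}")
        lines.append(f"  Source: {r.get('link', '')}")
    return "\n".join(lines)
```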

4. Feed to Local Ollama Model

Send the context document plus a grounding prompt to your Ollama instance for summarization.

5. Store Grounded Knowledge

Save the grounded summary to a local knowledge base file for future reference.
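The implementations below overwrite the knowledge file on every run. If you want the file to accumulate history across days instead, a merge-on-write helper is one option (the `append_knowledge` name and per-topic list structure are illustrative assumptions):

```python
import json
from pathlib import Path

def append_knowledge(path: str, topic: str, entry: dict) -> dict:
    """Merge a new grounded entry into the knowledge base file,
    keeping earlier days' entries instead of overwriting them."""
    p = Path(path)
    kb = json.loads(p.read_text()) if p.exists() else {}
    # Each topic maps to a list of dated entries.
    kb.setdefault(topic, []).append(entry)
    p.write_text(json.dumps(kb, indent=2))
    return kb
```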

Python Implementation

Python
import requests, os, json
from datetime import date

API_KEY = os.environ["SCAVIO_API_KEY"]
H = {"x-api-key": API_KEY, "Content-Type": "application/json"}
OLLAMA_URL = "http://localhost:11434/api/generate"

TOPICS = [
    "latest python 3.14 features",
    "ai agent framework updates 2026",
    "new search api providers 2026",
]

def search_topic(topic: str) -> str:
    resp = requests.post(
        "https://api.scavio.dev/api/v1/search",
        headers=H,
        json={"query": topic, "country_code": "us"},
        timeout=15,
    )
    resp.raise_for_status()
    # Keep the top 5 organic results as "- title: snippet (link)" lines.
    results = resp.json().get("organic_results", [])[:5]
    lines = [f"- {r.get('title', '')}: {r.get('snippet', '')} ({r.get('link', '')})" for r in results]
    return "\n".join(lines)

def ground_with_ollama(topic: str, context: str) -> str:
    prompt = f"Based on these current search results from {date.today()}, summarize the latest on: {topic}\n\nSearch results:\n{context}\n\nProvide a factual summary citing sources."
    resp = requests.post(OLLAMA_URL, json={"model": "llama3", "prompt": prompt, "stream": False}, timeout=60)
    resp.raise_for_status()
    return resp.json().get("response", "")

grounded = {}
for topic in TOPICS:
    context = search_topic(topic)
    summary = ground_with_ollama(topic, context)
    grounded[topic] = {"date": str(date.today()), "summary": summary}
    print(f"Grounded: {topic} ({len(summary)} chars)")

with open("grounded_knowledge.json", "w") as f:
    json.dump(grounded, f, indent=2)
print(f"Saved {len(grounded)} grounded topics")

JavaScript Implementation

JavaScript
import fs from 'node:fs';

const H = {'x-api-key': process.env.SCAVIO_API_KEY, 'Content-Type': 'application/json'};

const TOPICS = ['latest python 3.14 features', 'ai agent framework updates 2026', 'new search api providers 2026'];

async function searchTopic(topic) {
  const resp = await fetch('https://api.scavio.dev/api/v1/search', {method: 'POST', headers: H, body: JSON.stringify({query: topic, country_code: 'us'})});
  if (!resp.ok) throw new Error(`Search failed for "${topic}": ${resp.status}`);
  const results = ((await resp.json()).organic_results || []).slice(0, 5);
  return results.map(r => `- ${r.title}: ${r.snippet} (${r.link})`).join('\n');
}

async function groundWithOllama(topic, context) {
  const prompt = `Based on these current search results from ${new Date().toISOString().split('T')[0]}, summarize the latest on: ${topic}\n\nSearch results:\n${context}\n\nProvide a factual summary citing sources.`;
  const resp = await fetch('http://localhost:11434/api/generate', {method: 'POST', headers: {'Content-Type': 'application/json'}, body: JSON.stringify({model: 'llama3', prompt, stream: false})});
  if (!resp.ok) throw new Error(`Ollama request failed: ${resp.status}`);
  return (await resp.json()).response || '';
}

const grounded = {};
for (const topic of TOPICS) {
  const context = await searchTopic(topic);
  const summary = await groundWithOllama(topic, context);
  grounded[topic] = {date: new Date().toISOString().split('T')[0], summary};
  console.log('Grounded: '+topic+' ('+summary.length+' chars)');
}
fs.writeFileSync('grounded_knowledge.json', JSON.stringify(grounded, null, 2));
console.log('Saved '+Object.keys(grounded).length+' grounded topics');

Platforms Used

Google

Web search with knowledge graph, People Also Ask (PAA), and AI Overviews

Frequently Asked Questions

What does this workflow do?

Local LLMs running on Ollama hallucinate on anything after their training cutoff. This workflow runs daily searches on topics you care about, feeds the results into your local model's context, and stores grounded answers in a local knowledge file. Your assistant stays current without sending private queries to cloud APIs.

When does this workflow run?

It runs daily at 8 AM via cron or a task scheduler.

Which Scavio platforms does this workflow use?

This workflow uses the Google platform. Each platform is called via the same unified API endpoint.

Can I try this workflow for free?

Yes. Scavio's free tier includes 250 credits per month with no credit card required. That is enough to test and validate this workflow before scaling it.
