Overview
Local LLMs running on-premises lack access to current web data. This workflow fetches fresh search results daily via Scavio and stores them as structured context files that your local LLM can reference during inference, grounding its responses in real-world data.
Trigger
Daily cron at 6:00 AM before the first user query of the day.
Schedule
Runs on the daily cron; it can also be triggered on an agent query if fresh context is needed outside the schedule.
Workflow Steps
Define Daily Search Topics
Maintain a list of topics your local LLM needs current data on. These could be industry news, competitor updates, pricing changes, or market trends.
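A topic list can live in a small config structure. The sketch below is a minimal example; every topic name and query string in it is an illustrative assumption, to be replaced with the topics your deployment actually needs:

```python
# Hypothetical topic list; each name and query string is an illustrative
# assumption, not part of the workflow itself.
DAILY_TOPICS = [
    {"name": "industry_news", "query": "semiconductor industry news today"},
    {"name": "competitor_updates", "query": "Acme Corp product announcements"},
    {"name": "pricing_changes", "query": "cloud GPU pricing changes"},
    {"name": "market_trends", "query": "enterprise AI adoption trends"},
]
```

Keeping the list as plain data makes it easy to edit topics without touching the fetch code.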
Fetch Fresh Search Results
Query Scavio for each topic and collect the top results with titles, snippets, and source URLs.
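A minimal fetch loop might look like the sketch below. The endpoint, header names, and `organic_results` field come from the implementation snippets later in this document; the `title`, `snippet`, and `link` keys on each result are assumptions about the organic-result schema, not confirmed API fields:

```python
import os
import requests

API_URL = "https://api.scavio.dev/api/v1/search"

def top_results(response_json, top_n=5):
    """Pull title/snippet/URL from a Scavio-style response dict.

    The 'title', 'snippet', and 'link' field names are assumptions
    about the organic-result schema.
    """
    return [
        {"title": r.get("title"), "snippet": r.get("snippet"), "url": r.get("link")}
        for r in response_json.get("organic_results", [])[:top_n]
    ]

def fetch_topic(query, api_key=None):
    """POST one search query and return its top results."""
    headers = {
        "x-api-key": api_key or os.environ["SCAVIO_API_KEY"],
        "Content-Type": "application/json",
    }
    resp = requests.post(API_URL, headers=headers,
                         json={"query": query, "country_code": "us"}, timeout=30)
    resp.raise_for_status()  # fail loudly on auth or quota errors
    return top_results(resp.json())
```

Separating `top_results` from the network call keeps the parsing logic testable without hitting the API.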
Format as LLM Context Files
Write the search results to structured text files that can be loaded into the local LLM's context window or RAG pipeline.
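One way to write each topic's results to disk is sketched below; the file layout, directory name, and text format are assumptions, chosen so the files are readable both by a RAG ingester and directly in a context window:

```python
import datetime
import pathlib

def format_context(topic_name, results):
    """Render search results as a plain-text block an LLM can read directly.

    Expects result dicts with 'title', 'snippet', and 'url' keys
    (an assumed shape for the fetched results).
    """
    today = datetime.date.today().isoformat()
    lines = [f"[{topic_name}] fetched {today}"]
    for r in results:
        lines.append(f"- {r['title']}: {r['snippet']} (source: {r['url']})")
    return "\n".join(lines)

def write_context_file(topic_name, results, out_dir="daily_context"):
    """Write one topic's context to <out_dir>/<topic_name>.txt and return the path."""
    path = pathlib.Path(out_dir) / f"{topic_name}.txt"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(format_context(topic_name, results), encoding="utf-8")
    return path
```

Stamping each file with the fetch date lets the loader detect stale context.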
Inject Context into LLM System Prompt
Load the daily context files and prepend them to the LLM system prompt so all responses are grounded in fresh data.
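Loading the files and prepending them to the system prompt could be sketched as follows; the `daily_context` directory name and the framing text around the context are assumptions:

```python
import pathlib

def build_system_prompt(base_prompt, context_dir="daily_context"):
    """Prepend today's context files to the base system prompt."""
    files = sorted(pathlib.Path(context_dir).glob("*.txt"))
    context = "\n\n".join(p.read_text(encoding="utf-8") for p in files)
    if not context:
        # No fresh context available; fall back to the base prompt alone.
        return base_prompt
    return f"Current web context (refreshed daily):\n\n{context}\n\n{base_prompt}"
```

The returned string is then passed as the system message to whatever local runtime serves the model.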
Python Implementation
import os
import requests

# Read the API key from the environment; never hard-code it.
H = {'x-api-key': os.environ['SCAVIO_API_KEY'], 'Content-Type': 'application/json'}
payload = {'query': 'example', 'country_code': 'us'}
resp = requests.post('https://api.scavio.dev/api/v1/search', headers=H, json=payload, timeout=30)
resp.raise_for_status()
print(len(resp.json().get('organic_results', [])))  # count of organic results
JavaScript Implementation
const H = {'x-api-key': process.env.SCAVIO_API_KEY, 'Content-Type': 'application/json'};
const body = JSON.stringify({query: 'example', country_code: 'us'});
fetch('https://api.scavio.dev/api/v1/search', {method: 'POST', headers: H, body})
  .then(r => r.json())
  .then(d => console.log(d.organic_results?.length));
Platforms Used
Web search with knowledge graph, People Also Ask (PAA), and AI overviews