How to Build a Perplexity-Style Answer Engine with Next.js and Scavio

Build a Perplexity-style answer engine using Next.js and the Scavio search API. Stream cited answers grounded in real-time web results.

Perplexity AI popularized the pattern of answering questions with cited web sources in real time. The core architecture is straightforward: search the web for the user's question, feed the results as context to an LLM, and stream the response with source citations. This tutorial builds a minimal Perplexity clone using Next.js for the frontend, Scavio for real-time search, and the OpenAI streaming API for the answer. The result is a deployable web app that answers questions with live, cited sources.
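The pipeline can be sketched as a few small stages. The types and helper below are illustrative (the names are ours, not Scavio's); the real route in Step 1 inlines the same logic:

```typescript
// Illustrative shape of the pipeline: search -> format context -> stream answer.
// SearchResult mirrors the fields this tutorial reads from Scavio's organic_results.
interface SearchResult {
  title: string;
  snippet?: string;
  link: string;
}

// Turn ranked results into the numbered context block the prompt expects,
// so the model can cite sources as [1], [2], ...
function formatContext(results: SearchResult[]): string {
  return results
    .map((s, i) => `[${i + 1}] ${s.title}\n${s.snippet ?? ""}\n${s.link}`)
    .join("\n\n");
}
```

The numbering is what lets the [n] citations in the streamed answer map back to source cards in the UI.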

Prerequisites

  • Node.js 18 or higher
  • A Next.js project (created with npx create-next-app)
  • A Scavio API key
  • An OpenAI API key

Walkthrough

Step 1: Create the API route for search and answer

Build a Next.js API route that fetches Scavio results, formats them as context, and streams a GPT response back to the client.

// app/api/answer/route.ts
import { NextRequest } from "next/server";
import OpenAI from "openai";

const openai = new OpenAI();
const SCAVIO_KEY = process.env.SCAVIO_API_KEY!;

async function fetchSources(query: string) {
  const res = await fetch("https://api.scavio.dev/api/v1/search", {
    method: "POST",
    headers: { "x-api-key": SCAVIO_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({ query, country_code: "us" })
  });
  const data = await res.json();
  return (data.organic_results || []).slice(0, 5);
}

export async function POST(req: NextRequest) {
  const { question } = await req.json();
  const sources = await fetchSources(question);
  const context = sources.map((s: any, i: number) => `[${i+1}] ${s.title}\n${s.snippet || ""}\n${s.link}`).join("\n\n");
  const stream = await openai.chat.completions.create({
    model: "gpt-4o", stream: true,
    messages: [
      { role: "system", content: "Answer concisely using the sources. Cite with [n]." },
      { role: "user", content: `Sources:\n${context}\n\nQuestion: ${question}` }
    ]
  });
  const encoder = new TextEncoder();
  const readable = new ReadableStream({
    async start(controller) {
      controller.enqueue(encoder.encode(JSON.stringify({ sources }) + "\n"));
      for await (const chunk of stream) {
        const text = chunk.choices[0]?.delta?.content || "";
        if (text) controller.enqueue(encoder.encode(text));
      }
      controller.close();
    }
  });
  return new Response(readable, { headers: { "Content-Type": "text/plain" } });
}
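The route emits a tiny ad-hoc protocol: one JSON line of metadata ({ sources: [...] }), followed by raw answer text. In pure form it can be parsed like this (a sketch with an illustrative name; the client in Step 2 does the same thing incrementally as chunks arrive):

```typescript
// Split the route's response body into its two parts:
// line 1 is JSON metadata, everything after the first newline is the answer text.
function splitProtocol(raw: string): { sources: unknown[]; answer: string } {
  const newline = raw.indexOf("\n");
  if (newline === -1) throw new Error("metadata line missing");
  const meta = JSON.parse(raw.slice(0, newline));
  return { sources: meta.sources, answer: raw.slice(newline + 1) };
}
```

Newline-delimited JSON keeps the framing trivial; the only constraint is that the metadata itself must not contain a raw newline, which JSON.stringify guarantees.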

Step 2: Build the search UI component

Create a simple React component with a search input that streams the answer and displays source cards.

// app/page.tsx
"use client";
import { useState } from "react";

export default function Home() {
  const [question, setQuestion] = useState("");
  const [answer, setAnswer] = useState("");
  const [sources, setSources] = useState<any[]>([]);

  async function handleSearch() {
    setAnswer("");
    setSources([]);
    const res = await fetch("/api/answer", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ question })
    });
    const reader = res.body!.getReader();
    const decoder = new TextDecoder();
    let gotMeta = false;
    let buffer = "";
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      // stream: true keeps multi-byte characters split across chunks intact
      buffer += decoder.decode(value, { stream: true });
      if (!gotMeta) {
        const newline = buffer.indexOf("\n");
        if (newline === -1) continue; // metadata line not fully received yet
        const meta = JSON.parse(buffer.slice(0, newline));
        setSources(meta.sources);
        buffer = buffer.slice(newline + 1);
        gotMeta = true;
      }
      setAnswer(buffer);
    }
  }

  return (
    <main>
      <input value={question} onChange={e => setQuestion(e.target.value)} placeholder="Ask anything..." />
      <button onClick={handleSearch}>Search</button>
      <div>{answer}</div>
      <div>{sources.map((s, i) => <a key={i} href={s.link}>[{i+1}] {s.title}</a>)}</div>
    </main>
  );
}
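The component above renders the answer as plain text, so the [n] markers appear literally. If you want to highlight only the sources the model actually cited, a small parser can extract the citation indices (an optional enhancement; the function name is ours):

```typescript
// Extract the distinct 1-based citation indices that appear as [n] in the answer,
// in order of first appearance, so the UI can emphasize only the cited sources.
function citedIndices(answer: string): number[] {
  const seen = new Set<number>();
  const re = /\[(\d+)\]/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(answer)) !== null) {
    seen.add(Number(m[1]));
  }
  return [...seen];
}
```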

Step 3: Set environment variables

Configure the Scavio and OpenAI API keys in your .env.local file.

Bash
SCAVIO_API_KEY=your_scavio_api_key
OPENAI_API_KEY=your_openai_api_key

Python Example

Python
import os
import requests
from openai import OpenAI

SCAVIO_KEY = os.environ.get("SCAVIO_API_KEY", "your_scavio_api_key")
client = OpenAI()

def search(question: str) -> list[dict]:
    r = requests.post("https://api.scavio.dev/api/v1/search",
                      headers={"x-api-key": SCAVIO_KEY},
                      json={"query": question, "country_code": "us"})
    r.raise_for_status()
    return r.json().get("organic_results", [])[:5]

def answer(question: str) -> None:
    sources = search(question)
    ctx = "\n\n".join(f"[{i+1}] {s['title']}\n{s.get('snippet', '')}\n{s['link']}" for i, s in enumerate(sources))
    stream = client.chat.completions.create(
        model="gpt-4o", stream=True,
        messages=[
            {"role": "system", "content": "Answer using sources. Cite with [n]."},
            {"role": "user", "content": f"Sources:\n{ctx}\n\nQuestion: {question}"}
        ])
    for chunk in stream:
        text = chunk.choices[0].delta.content or ""
        print(text, end="", flush=True)
    print()

if __name__ == "__main__":
    answer("What is the state of AI agents in 2026?")

JavaScript Example

JavaScript
const SCAVIO_KEY = process.env.SCAVIO_API_KEY || "your_scavio_api_key";
const { OpenAI } = require("openai");
const client = new OpenAI();

async function search(question) {
  const res = await fetch("https://api.scavio.dev/api/v1/search", {
    method: "POST",
    headers: { "x-api-key": SCAVIO_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({ query: question, country_code: "us" })
  });
  const data = await res.json();
  return (data.organic_results || []).slice(0, 5);
}

async function answer(question) {
  const sources = await search(question);
  const ctx = sources.map((s, i) => `[${i+1}] ${s.title}\n${s.snippet || ""}\n${s.link}`).join("\n\n");
  const stream = await client.chat.completions.create({
    model: "gpt-4o", stream: true,
    messages: [
      { role: "system", content: "Answer using sources. Cite with [n]." },
      { role: "user", content: `Sources:\n${ctx}\n\nQuestion: ${question}` }
    ]
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || "");
  }
}

answer("State of AI agents 2026").catch(console.error);

Expected Output

Text
AI agents in 2026 have matured significantly. According to recent reports, the market
has shifted from experimental chatbots to production-grade autonomous systems [1].
Major frameworks like LangGraph and CrewAI now support stateful, multi-step workflows
out of the box [2]. Enterprise adoption has accelerated, with 40% of Fortune 500
companies deploying at least one agent-based system [3].

Sources:
[1] https://example.com/ai-agents-2026
[2] https://example.com/agent-frameworks
[3] https://example.com/enterprise-agents

Frequently Asked Questions

How long does this tutorial take?

Most developers complete this tutorial in 15 to 30 minutes. You will need a Scavio API key (free tier works) and a working Python or JavaScript environment.

What do I need before starting?

Node.js 18 or higher, a Next.js project, a Scavio API key, and an OpenAI API key. A Scavio API key gives you 500 free credits per month.

Is there a free tier?

Yes. The free tier includes 500 credits per month, which is more than enough to complete this tutorial and prototype a working solution.

Does Scavio integrate with LangChain or other frameworks?

Scavio has a native LangChain package (langchain-scavio), an MCP server, and a plain REST API that works with any HTTP client. This tutorial uses the raw REST API, but you can adapt it to your framework of choice.
