agents · javascript · developer-tools

An AI Agent That Tells You If an npm Package Is Worth Using

Building an AI agent that evaluates npm packages using search data, GitHub stats, and community sentiment from Reddit.


Every developer has asked the same question: should I use this npm package or write it myself? The answer usually involves checking GitHub stars, weekly downloads, last commit date, and open issues. But that only tells part of the story. Real evaluation requires searching for community sentiment, known vulnerabilities, and whether maintainers actually respond to issues.

This post shows how to build an AI agent that evaluates npm packages by combining search data, GitHub stats, and community discussion -- all through tool calls to a search API.

What a Package Evaluator Should Check

A thorough evaluation goes beyond npm's download count. Your agent should gather:

  • GitHub activity: stars, forks, open issues, last commit
  • Community sentiment: Reddit threads, blog posts, Stack Overflow answers
  • Security: known CVEs, audit warnings, dependency chain risks
  • Alternatives: what else exists and how they compare
  • Maintenance signals: release frequency, issue response time

An AI agent can gather all of this in parallel using search API calls, then synthesize the findings into a recommendation.
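Because the searches are independent, they can be fired concurrently. A minimal sketch of that idea -- the `gatherSignals` helper and its `search` parameter are illustrative names, not part of the agent code later in this post:

```typescript
// Fire every evaluation search at once instead of sequentially.
// `search` can be any async (query: string) => Promise<T> function --
// the real Scavio wrapper in production, or a stub in tests.
async function gatherSignals<T>(
  pkg: string,
  search: (query: string) => Promise<T>
): Promise<Record<string, T>> {
  const queries = [
    `${pkg} npm`,
    `${pkg} github issues`,
    `${pkg} vs alternative`,
    `${pkg} security vulnerability CVE`,
    `${pkg} reddit review`
  ];
  // Promise.all runs all five searches concurrently and preserves order.
  const results = await Promise.all(queries.map(search));
  return Object.fromEntries(
    queries.map((q, i) => [q, results[i]] as [string, T])
  );
}
```

With five searches, this turns five sequential round trips into one wall-clock wait for the slowest search.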

Architecture

The agent is a simple loop: receive a package name, run several searches, collect structured results, and produce a verdict. Here is the tool definition:

const tools = [
  {
    type: "function",
    function: {
      name: "search_google",
      description: "Search Google for information about an npm package",
      parameters: {
        type: "object",
        properties: {
          query: { type: "string" }
        },
        required: ["query"]
      }
    }
  }
];

When the model calls search_google, your code executes the search via Scavio and returns the results:

async function executeSearch(query: string) {
  const res = await fetch("https://api.scavio.dev/api/v1/search", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Assert the env var is set; fail fast at startup if it is missing.
      "x-api-key": process.env.SCAVIO_API_KEY!
    },
    body: JSON.stringify({ platform: "google", query, mode: "full" })
  });
  if (!res.ok) {
    throw new Error(`Search failed: ${res.status} ${res.statusText}`);
  }
  return res.json();
}

The Evaluation Prompt

The system prompt tells the agent exactly what to investigate:

You are an npm package evaluator. Given a package name, run these searches:

1. "[package] npm" - get basic info and download trends
2. "[package] github issues" - check maintenance health
3. "[package] vs alternative" - find competitors
4. "[package] security vulnerability CVE" - check for known issues
5. "[package] reddit review" - find community opinions

After gathering data, produce a report with:
- Summary (1-2 sentences)
- Maintenance health (active/stale/abandoned)
- Community sentiment (positive/mixed/negative)
- Known risks
- Recommended alternatives if any
- Final verdict: USE / USE WITH CAUTION / AVOID
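Because the prompt pins down the report format, downstream code can recover the verdict line programmatically. This `parseVerdict` helper is illustrative, not part of the agent itself:

```typescript
// Extract the final verdict from the agent's report so callers can
// branch on it. The "Final verdict:" line format comes from the
// system prompt above.
type Verdict = "USE" | "USE WITH CAUTION" | "AVOID";

function parseVerdict(report: string): Verdict | null {
  // "USE WITH CAUTION" must precede "USE" in the alternation,
  // otherwise the regex would stop at the shorter match.
  const m = report.match(/Final verdict:\s*(USE WITH CAUTION|USE|AVOID)/i);
  return m ? (m[1].toUpperCase() as Verdict) : null;
}
```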

Handling the Tool Call Loop

The agent loop processes tool calls until the model produces a final text response:

async function evaluate(packageName: string) {
  const messages: any[] = [
    { role: "system", content: SYSTEM_PROMPT },
    { role: "user", content: `Evaluate: ${packageName}` }
  ];

  while (true) {
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages,
      tools
    });
    const choice = response.choices[0];
    if (choice.finish_reason === "stop") {
      return choice.message.content;
    }
    // Append the assistant message (with its tool_calls) once per round,
    // before the tool results. Pushing it inside the loop below would
    // duplicate it whenever the model issues parallel tool calls.
    messages.push(choice.message);
    for (const call of choice.message.tool_calls ?? []) {
      const result = await executeSearch(
        JSON.parse(call.function.arguments).query
      );
      messages.push({
        role: "tool",
        tool_call_id: call.id,
        content: JSON.stringify(result.organic?.slice(0, 5))
      });
    }
  }
}
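The loop forwards the top five organic results verbatim. If token usage matters, a small helper can first strip each result down to the fields the model needs -- the `OrganicResult` field names here are assumptions about the shape of Scavio's response, not documented guarantees:

```typescript
// Strip each organic result down to title, link, and snippet before
// serializing it into the tool message. Extra fields (rank, metadata,
// etc.) are dropped to keep the context window small.
interface OrganicResult {
  title?: string;
  link?: string;
  snippet?: string;
  [extra: string]: unknown;
}

function trimResults(organic: OrganicResult[] = [], limit = 5): string {
  return JSON.stringify(
    organic
      .slice(0, limit)
      .map(({ title, link, snippet }) => ({ title, link, snippet }))
  );
}
```

In the loop above, the tool message's `content` would then be `trimResults(result.organic)` instead of the raw slice.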

What You Learn From Real Data

Running this agent against popular packages reveals patterns that star counts alone miss. A package with 10,000 stars but no commits in 8 months is a risk. A package with 500 stars but active maintenance and positive Reddit threads is often the better choice.

The search results also surface warnings you would not find on the package's own README -- blog posts about migration nightmares, Stack Overflow threads about edge cases, and CVE reports buried in security databases.

Going Further

You can extend this agent to check YouTube for tutorial coverage using Scavio's search_youtube endpoint. The same pattern works for evaluating any dependency -- Python packages, Go modules, Rust crates. Every dependency decision is a bet on someone else's code. An agent that does the research for you makes that bet more informed.
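The YouTube extension follows the same tool-definition pattern. A sketch of the second tool -- the schema mirrors `search_google`, and whether the same request body with `platform: "youtube"` satisfies Scavio's `search_youtube` endpoint is an assumption to verify against its docs:

```typescript
// A second tool the agent can call to check tutorial coverage.
// Appending it to the existing `tools` array is all the model needs
// to start issuing YouTube searches alongside Google ones.
const youtubeTool = {
  type: "function",
  function: {
    name: "search_youtube",
    description: "Search YouTube for tutorials and talks about an npm package",
    parameters: {
      type: "object",
      properties: {
        query: { type: "string" }
      },
      required: ["query"]
    }
  }
};
```

Your executor would then dispatch on the tool name, routing `search_youtube` calls to the YouTube endpoint the same way `search_google` calls go to Google search.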