MCP Servers That Turn Docs into Claude Code Skills
How MCP servers transform documentation URLs into Claude Code skills -- the pattern and why it matters for developer tools.
A pattern is emerging in the Claude Code ecosystem: developers point their AI assistant at a documentation URL through an MCP server, and the assistant gains a new skill. No custom tool definitions, no prompt engineering, no code. Just a URL that the assistant can read and act on. This pattern is reshaping how developers extend AI coding assistants.
The Pattern
An MCP server exposes tools that an AI assistant can call. Most MCP servers provide functional tools -- search, create, update, delete. But a growing number serve a different purpose: they expose documentation as context that the assistant can reference while working.
The simplest version is a fetch tool that retrieves a URL and returns its content. Claude Code already has web access through MCP, so the pattern often looks like this: the developer adds an MCP server, the server provides a tool that returns documentation content, and Claude Code uses that content to understand a library, API, or framework it was not trained on.
# Add a docs-serving MCP server to Claude Code
claude mcp add --transport http my-docs-server https://docs-mcp.example.com/mcp
Why Documentation URLs Matter
LLMs have training cutoffs. Libraries release new versions. APIs change their schemas. When Claude Code encounters a library it does not know well, it either hallucinates the API or asks you for help. An MCP server that serves current documentation solves this by giving the assistant access to the latest reference material on demand.
This is different from RAG. RAG systems retrieve chunks of documentation based on semantic similarity to a query. An MCP docs tool gives the assistant the ability to request specific pages or sections of documentation -- more like looking up a reference manual than searching through a knowledge base.
Building a Docs MCP Server
A minimal docs MCP server needs one tool: a function that takes a documentation path and returns its content. Here is the basic structure using the MCP SDK:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "docs-server", version: "1.0.0" });

server.tool(
  "get_docs",
  "Retrieve documentation for a specific topic",
  { topic: z.string().describe("The documentation topic or path") },
  async ({ topic }) => {
    // fetchDocsContent is supplied by the server author -- see below
    const content = await fetchDocsContent(topic);
    return {
      content: [{ type: "text", text: content }],
    };
  }
);

// Connect over stdio so Claude Code can launch the server as a subprocess
await server.connect(new StdioServerTransport());
The fetchDocsContent function can pull from a static site, a CMS, a Git repository, or any other source. The MCP server acts as a bridge between the documentation source and the AI assistant.
Real-World Examples
Several MCP servers already follow this pattern in production:
- Framework documentation servers: Serve the latest docs for frameworks like Next.js, SvelteKit, or Astro so the assistant uses current APIs instead of outdated training data
- API reference servers: Expose OpenAPI specs or API reference pages so the assistant can generate correct API calls without guessing at parameter names
- Internal knowledge bases: Companies run private MCP servers that expose internal documentation, style guides, and architecture decisions to their engineering team's AI assistants
- Search API documentation: Services like Scavio expose their API docs through MCP, letting the assistant learn the available tools and parameters without the developer explaining them
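For the API-reference case, one approach is to render operations from an OpenAPI document as plain text the assistant can read. The spec object below is a minimal hypothetical example; a real server would load it from a spec file or URL:

```typescript
// A minimal, hypothetical OpenAPI document -- in practice this would be
// loaded from a spec file or fetched from the API provider
const openApiSpec: { paths: Record<string, Record<string, any>> } = {
  paths: {
    "/search": {
      get: {
        summary: "Search indexed documents",
        parameters: [
          { name: "q", in: "query", required: true, schema: { type: "string" } },
          { name: "limit", in: "query", required: false, schema: { type: "integer" } },
        ],
      },
    },
  },
};

// Render the operations under one path as text suitable for a tool response
function describeEndpoint(path: string): string {
  const ops = openApiSpec.paths[path];
  if (!ops) return `No documented operations for ${path}.`;
  return Object.entries(ops)
    .map(([method, op]) => {
      const params = (op.parameters ?? [])
        .map((p: any) => `${p.name} (${p.in}${p.required ? ", required" : ""})`)
        .join(", ");
      return `${method.toUpperCase()} ${path}: ${op.summary}\nParameters: ${params || "none"}`;
    })
    .join("\n\n");
}
```

Wiring describeEndpoint into a tool like get_docs gives the assistant exact parameter names and requirements instead of leaving it to guess.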
The Skill Emergence
What makes this pattern powerful is that the assistant does not just read the documentation -- it synthesizes it into a skill. After reading Scavio's API docs through an MCP tool, Claude Code can write correct API calls, handle error responses, and use advanced features like filters and pagination without the developer providing examples.
This is emergent behavior. The MCP server does not teach the assistant a skill directly. It provides raw documentation that the assistant converts into working knowledge through its own reasoning. The quality of the documentation directly determines the quality of the skill.
Limitations and Trade-offs
Documentation-as-MCP has real constraints:
- Each documentation fetch consumes tokens from the assistant's context window -- large docs may not fit
- The assistant decides when to fetch docs, which means it sometimes does not look up information it should
- Documentation quality varies -- poorly written docs produce poorly informed assistants
- There is no caching standard in MCP yet, so repeated fetches of the same page waste tokens and latency
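Until a caching standard lands in MCP, a server can cache on its own side. A minimal sketch, assuming a fetcher with the shape of fetchDocsContent and an in-memory TTL cache (both names are illustrative):

```typescript
// Any async function that turns a topic into documentation text
type Fetcher = (topic: string) => Promise<string>;

// Wrap a fetcher with a simple in-memory TTL cache so repeated
// requests for the same topic skip the network entirely
function withCache(fetchDocs: Fetcher, ttlMs = 5 * 60 * 1000): Fetcher {
  const cache = new Map<string, { content: string; expires: number }>();
  return async (topic: string) => {
    const hit = cache.get(topic);
    if (hit && hit.expires > Date.now()) return hit.content; // serve cached copy
    const content = await fetchDocs(topic);
    cache.set(topic, { content, expires: Date.now() + ttlMs });
    return content;
  };
}
```

This only saves server-side latency and upstream fetches; the assistant still pays the context-window cost each time the cached content is returned.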
Despite these limitations, the pattern works well enough that it is becoming a default part of the Claude Code workflow for teams working with rapidly evolving APIs and libraries.