# llmcpd

Version v1.1.0

## Overview
llmcpd turns any llms.txt or llms-full.txt file into a fully-featured Model Context Protocol server. It makes LLM-optimised documentation instantly searchable and fetchable by AI coding agents, with intelligent caching so repeated lookups stay fast.
## How it works
Point llmcpd at a documentation source and it indexes the content in the background. AI agents connect to the MCP server and call tools to search, fetch sections, list links, or check indexing status — all without hitting the upstream source on every request.
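To make the tool-calling flow concrete, here is a hypothetical sketch of the JSON-RPC 2.0 `tools/call` request an MCP client would send to invoke the `search` tool. The argument shape (`query`) is an assumption for illustration; consult the server's published tool schema for the actual parameters.

```typescript
// Shape of an MCP tools/call request (JSON-RPC 2.0).
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: {
    name: string;
    arguments: Record<string, unknown>;
  };
}

// Build a request for the `search` tool. The `query` argument name is
// an assumption, not taken from llmcpd's actual schema.
function buildSearchRequest(query: string, id = 1): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name: "search", arguments: { query } },
  };
}

console.log(JSON.stringify(buildSearchRequest("rate limiting")));
```

The server answers with a matching JSON-RPC response containing the tool's result, so the agent never needs to know where the underlying documentation lives.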
## MCP Tools

| Tool | Purpose |
|---|---|
| `search` | Full-text search across indexed documentation |
| `fetch` | Retrieve a specific section or page |
| `listSections` | Enumerate top-level documentation sections |
| `listLinks` | List all links within a document |
| `summary` | Get a concise summary of indexed content |
| `status` | Check indexing and cache status |
| `reindex` | Trigger a manual re-crawl |
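Server-side, the seven tools above could be dispatched from a single typed handler map. The following is a sketch, not llmcpd's actual source; the handler bodies are stubs and the argument names (`query`, `id`) are assumptions.

```typescript
type ToolName =
  | "search"
  | "fetch"
  | "listSections"
  | "listLinks"
  | "summary"
  | "status"
  | "reindex";

type Args = Record<string, unknown>;

// Stub handlers keyed by tool name; the Record type guarantees at
// compile time that every tool has an implementation.
const handlers: Record<ToolName, (args: Args) => string> = {
  search: (args) => `search:${String(args.query)}`,
  fetch: (args) => `fetch:${String(args.id)}`,
  listSections: () => "listSections",
  listLinks: () => "listLinks",
  summary: () => "summary",
  status: () => "status",
  reindex: () => "reindex",
};

function callTool(name: ToolName, args: Args = {}): string {
  return handlers[name](args);
}
```

Using a `Record<ToolName, …>` map rather than a `switch` means adding a tool to the union without a handler is a compile-time error.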
## Key Technologies
- TypeScript — Fully typed Node.js implementation
- Model Context Protocol — Standard interface for AI agent tool integration
- Worker threads — Non-blocking deep crawl of nested markdown files
- Disk cache — ETag and Last-Modified validation to minimise upstream requests
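The caching idea above can be sketched as follows. This assumes a cache entry that stores the body alongside the `ETag` and `Last-Modified` values from the original response; llmcpd's real cache layout may differ.

```typescript
// Hypothetical cache entry: body plus the validators from the last fetch.
interface CacheEntry {
  body: string;
  etag?: string;
  lastModified?: string; // HTTP-date of the cached copy
}

// Build conditional-request headers so the upstream server can answer
// 304 Not Modified instead of resending the full document.
function revalidationHeaders(entry?: CacheEntry): Record<string, string> {
  const headers: Record<string, string> = {};
  if (entry?.etag) headers["If-None-Match"] = entry.etag;
  if (entry?.lastModified) headers["If-Modified-Since"] = entry.lastModified;
  return headers;
}

// On 304, reuse the cached body; otherwise take the fresh response body.
function resolveBody(status: number, entry: CacheEntry | undefined, fresh: string): string {
  return status === 304 && entry ? entry.body : fresh;
}
```

A 304 response carries no body, so combining both validators keeps repeat crawls nearly free when the upstream documentation has not changed.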
## License

MIT

## Features
- MCP tools: search, fetch, list sections, list links, summary, status, and reindex
- Background indexing with configurable refresh intervals
- Disk-based caching with ETag and Last-Modified HTTP validation
- Worker thread-based deep crawling of nested markdown files
- Async chunking of full documentation by markdown headings
- Markdown fallback support for HTML pages
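The heading-based chunking mentioned above can be sketched as a pure function that splits a markdown document on ATX headings (`#` through `######`). This is an illustrative sketch; llmcpd's actual chunker may handle setext headings, code fences, or nesting differently.

```typescript
interface Chunk {
  heading: string; // heading text without the leading #s
  content: string; // body lines until the next heading
}

// Split markdown into one chunk per ATX heading. Lines before the
// first heading are ignored in this simplified version.
function chunkByHeadings(markdown: string): Chunk[] {
  const chunks: Chunk[] = [];
  let current: Chunk | null = null;
  for (const line of markdown.split("\n")) {
    const m = /^(#{1,6})\s+(.*)$/.exec(line);
    if (m) {
      if (current) chunks.push(current);
      current = { heading: m[2], content: "" };
    } else if (current) {
      current.content += line + "\n";
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Chunking at heading boundaries keeps each indexed unit self-contained, so a `search` hit can return a coherent section rather than an arbitrary byte range.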