LLMs are trained on a snapshot of the web: APIs change, libraries update, and models confidently generate code that no longer works. The problem gets worse with newer or more niche devtools.
Some platforms are solving this by publishing llms.txt - AI-friendly versions of their docs that stay up to date. The catch is that there's no good way for agents to search across or within them.
So I built Statespace, the first search engine for llms.txt sites. It returns relevant links from an index of millions of pages, leaving context retrieval to your agent. And it's 100% free to use via web, SDK, MCP, or CLI.
You can run plain queries to search across all docs:
mcp server setup
vector database embeddings
oauth2 token refresh
Or scope your queries to a specific site with the site: prefix:
stripe: webhook verification
mistral.ai: function calling
docs.supabase.com: edge functions auth
Quotes work like Google for exact phrases:
"context window limit"
vector database "semantic search"
stripe: "webhook signature verification"