r/MCPservers

I’ve been experimenting with making MCP tools feel more Unix-native
▲ 9 r/MCPservers+7 crossposts


There are already some interesting projects around MCP tooling and conversion layers like mcporter and similar libraries.
While trying them, I realized what I personally missed wasn’t just “wrapping” MCP servers, but having an environment where:
  • MCP tools become normal CLIs
  • they work naturally with pipes/scripts/CI
  • agents can use them without loading huge schemas every session
  • and you can also create your own CLI tools directly from Python code
So I started building cli-use.

Example:

cli-use add fs /tmp
cli-use fs list_directory --path /tmp

After that the MCP server behaves like a regular Unix command:

cli-use fs search_files --path /tmp --pattern "*.md" | head
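The "create your own CLI tools directly from Python code" part could look something like this. This is a hypothetical sketch, not cli-use's actual API: a decorator registers plain functions, and a tiny dispatcher turns `tool --arg value` into a call.

```python
import argparse

# Hypothetical registry; cli-use's real interface almost certainly differs.
TOOLS = {}

def tool(fn):
    """Register a plain Python function as a CLI-invokable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def list_directory(path: str):
    """List a directory, sorted, like a minimal `ls`."""
    import os
    return sorted(os.listdir(path))

@tool
def word_count(text: str):
    """Count whitespace-separated words, like `wc -w`."""
    return len(text.split())

def run(argv):
    """Dispatch `<tool> --arg value ...` to the registered function."""
    name, *rest = argv
    parser = argparse.ArgumentParser(prog=name)
    for param in TOOLS[name].__annotations__:
        if param != "return":
            parser.add_argument(f"--{param}", required=True)
    args = parser.parse_args(rest)
    return TOOLS[name](**vars(args))
```

Each registered function then behaves like a small Unix command, which is what makes the pipe/script/CI story work.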

Also added things like:

  • daemon mode for fast repeated calls
  • caching
  • shell completions
  • automatic SKILL.md generation for agents

One thing I found interesting is that stripping out the MCP protocol overhead ended up saving a pretty large number of tokens during agent workflows.
Still experimenting with the idea, but I’m curious whether other people working with MCP also want a more shell-native / Unix-style approach to tools.

u/Just_Vugg_PolyMCP — 4 hours ago
▲ 5 r/MCPservers+3 crossposts

Every MCP server you add makes your agent slightly dumber. Here is what actually fixes it.

One thing I’ve started noticing with MCP-based agents is that performance degrades much earlier than most people expect, especially once the number of integrations becomes large.

Small setups work surprisingly well. A few integrations, a handful of tools, manageable schemas, and the agent behaves predictably. The problems usually begin once teams start connecting the systems they actually use in production. Slack, Gmail, GitHub, Linear, Notion, databases, deployment tooling, internal APIs, monitoring systems. The integration surface grows very quickly.

At that point, the issue stops being “model intelligence” and starts becoming a context management problem.

Most MCP servers expose many tools, and each tool brings descriptions, parameter schemas, examples, and edge cases into the prompt space. Individually this feels harmless, but collectively it creates a very noisy environment for the model to reason inside. The agent spends more effort understanding the tool ecosystem than solving the task itself.

You can partially reduce the problem with lazy loading or dynamic tool visibility, but those approaches still inherit the same scaling issue underneath. The total surface area keeps growing.

I recently came across this open-source project Corsair that takes a different approach, and I thought the design was genuinely interesting.

Instead of exposing hundreds of tools directly, it exposes four generic primitives:

  • setup and authentication
  • operation discovery
  • schema inspection
  • execution

The important detail is that schemas are fetched only when the agent decides it needs them. The model first discovers available operations, then inspects a specific schema on demand, and finally executes the workflow.

That keeps the tool surface effectively constant regardless of how many integrations exist underneath.

The design feels much closer to how humans interact with unfamiliar systems. You first discover what capabilities exist, then inspect the details you need, and only then perform the action. Most current MCP ecosystems invert this by front-loading the entire integration surface into context immediately.
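The discover → inspect → execute flow can be sketched in a few lines. This is a toy illustration with made-up names, not Corsair's actual primitives: the model only ever sees the generic functions, and per-tool schemas load on demand.

```python
# Toy registry standing in for many integrations; only the generic
# primitives below are exposed to the model, never the full tool list.
REGISTRY = {
    "slack.send_message": {"params": {"channel": "string", "text": "string"},
                           "fn": lambda channel, text: f"sent to {channel}"},
    "github.create_issue": {"params": {"repo": "string", "title": "string"},
                            "fn": lambda repo, title: f"issue in {repo}"},
}

def discover_operations(query: str) -> list[str]:
    """Primitive: list operation names matching a query."""
    return [name for name in REGISTRY if query in name]

def inspect_schema(name: str) -> dict:
    """Primitive: fetch one schema only when the agent asks for it."""
    return REGISTRY[name]["params"]

def execute(name: str, **kwargs) -> str:
    """Primitive: run the chosen operation."""
    return REGISTRY[name]["fn"](**kwargs)

# The agent's loop: discover, inspect on demand, then act.
ops = discover_operations("slack")
schema = inspect_schema(ops[0])
result = execute(ops[0], channel="#general", text="hi")
```

The tool surface the model holds in context stays constant no matter how many entries the registry grows to.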

I suspect a lot of current agent reliability issues are really interface design problems. As integration counts grow, the systems that scale will probably be the ones that minimize what the model has to hold in working memory at any given moment.

u/Arindam_200 — 2 days ago
▲ 6 r/MCPservers+1 crossposts

Creating and hosting MCP servers is not a hard problem

We wrote this quite a while ago, but forgot to share it here: https://www.speakeasy.com/blog/we-were-wrong-about-the-hard-problem

I know there are a lot of people building MCP tooling, and our experience may be interesting.

We started off generating MCP servers from APIs in Dec. 2024. Then we built an MCP hosting platform early in 2025. At the time, we thought internal MCP usage was going to be massive, and that if we could help people quickly scaffold MCP servers it would unlock teams.

We were right about the first bit: internal usage is massive. But we missed on the second bit. Building and hosting MCP servers simply wasn't / isn't a hard problem.

What we ultimately found was that at most companies, the bottleneck for internal MCP usage was updating governance. That's what we've ended up pursuing.

Hope people find this interesting!

u/ndimares — 1 day ago
▲ 15 r/MCPservers+2 crossposts

MCP-Generator v2.0.0

A few days ago I posted a CLI that converts OpenAPI specs into MCP servers. The feedback here was brutal and exactly what I needed.

Here's what I actually fixed and shipped based on your comments:

The original post got two pieces of feedback that changed the project:

"Raw endpoints wrapped as tools is a poor LLM interface pattern" — Fair. The generator now produces a scaffold you're supposed to implement, not ship. Incremental generation (@@mcp-gen:start/end markers) means you regenerate without losing your handler logic.
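Marker-based regeneration can be approximated like this. A simplified sketch, not the generator's actual implementation: regions between named start/end markers in the old file are user-owned and copied over the fresh scaffold.

```python
import re

# Simplified marker merge: named regions in the OLD file survive
# regeneration; everything else comes from the new scaffold.
MARKER = re.compile(
    r"// @@mcp-gen:start:(\w+)\n(.*?)// @@mcp-gen:end:\1", re.S
)

def merge(old: str, generated: str) -> str:
    """Replace each named region in `generated` with the old file's body."""
    kept = {name: body for name, body in MARKER.findall(old)}
    def swap(m):
        name, default = m.group(1), m.group(2)
        body = kept.get(name, default)
        return f"// @@mcp-gen:start:{name}\n{body}// @@mcp-gen:end:{name}"
    return MARKER.sub(swap, generated)
```

Regions absent from the old file fall back to the newly generated default, so adding endpoints to the spec still produces fresh stubs.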

"console.log leaking into stdio corrupts the JSON-RPC stream" — This was a real bug. Fixed with a log() helper that writes to stderr and a safeSerialize() that handles Buffer/Uint8Array as base64 before anything touches stdout. Circular $ref schemas were the next wall — fixed with SwaggerParser.dereference({ circular: "ignore" }) + a visited-Set guard in the schema walker.
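The stdout-isolation pattern translates directly to Python (a sketch of the pattern, not the project's exact helpers): diagnostics go to stderr, and raw bytes are base64-encoded before anything can touch the JSON-RPC stream on stdout.

```python
import base64
import json
import sys

def log(*args):
    """Diagnostics go to stderr so stdout stays a clean JSON-RPC stream."""
    print(*args, file=sys.stderr)

def safe_serialize(obj):
    """JSON-encode a result, converting raw bytes to base64 strings first."""
    def default(value):
        if isinstance(value, (bytes, bytearray)):
            return base64.b64encode(bytes(value)).decode("ascii")
        raise TypeError(f"unserializable: {type(value)!r}")
    return json.dumps(obj, default=default)
```

Any accidental `print()` to stdout in a stdio MCP server corrupts framing, which is why routing logs to stderr matters more here than in a typical CLI.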

What shipped in v2.0.0:

  • YAML input (.json, .yaml, .yml, URLs)
  • Python/FastMCP + Pydantic v2 target
  • Incremental generation — re-run the generator without losing custom handlers
  • oneOf/anyOf/discriminator support for complex specs
  • Auth stubs from securitySchemes
  • Interactive CLI mode for first-time users
  • Built-in registry: mcp-gen init --from stripe (10+ APIs: Stripe, GitHub, Slack, OpenAI, Twilio, Shopify, Kubernetes, DigitalOcean, Azure)
  • stdout isolation + safe binary serialization
  • Circular $ref safety
  • Published on npm and pip

Use cases:

  • Give Claude instant access to any REST API in under 2 minutes
  • Generate internal API MCP servers for your team
  • Rapid prototyping — have a working server before writing a single handler
  • API-first development — spec first, scaffold second, logic last

2-minute setup:

npm install -g mcp-gen
mcp-gen init --from stripe --out ./stripe-mcp
cd stripe-mcp && npm install && npm start

Then add it to claude_desktop_config.json and Claude has full Stripe access.

GitHub: https://github.com/ChristopherDond/MCP-Generator
npm: https://www.npmjs.com/package/mcp-gen
Install: npm install -g mcp-gen

Questions? Want to contribute? Drop a comment or check out CONTRIBUTING.md on GitHub: https://github.com/ChristopherDond/MCP-Generator/blob/main/CONTRIBUTING.md

Still a lot to do — oneOf edge cases, better binary streaming, more registry entries. If you find a spec it chokes on, open an issue.

Thanks for all the feedback and stars!!!

u/ChristopherDci — 2 days ago
▲ 7 r/MCPservers+1 crossposts

Handling MCP for non-technical users

Engineering has approved MCP tools in production. Good. Then marketing or finance wants access.
Fast forward six weeks: IT is still reviewing the request, and the team has been using personal Claude the whole time.

How are people handling MCP for non-technical users?

My take: governance at the gateway level. Identity from the IdP, role-scoped tools, vault-resolved credentials, and audit per user.
Users keep whatever client they want.
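Role-scoped tool exposure at the gateway can be as simple as filtering the tool list by the role the IdP asserts. An illustrative sketch with made-up role and tool names, not lunar.dev's actual mechanism:

```python
import fnmatch

# Made-up role-to-scope mapping; in practice this comes from the IdP and
# the gateway's policy config, and credentials resolve from a vault.
ROLE_SCOPES = {
    "engineering": {"github.*", "deploy.*", "slack.*"},
    "marketing": {"slack.*", "analytics.*"},
}

def visible_tools(role: str, all_tools: list[str]) -> list[str]:
    """Return only the tools a user's role is scoped to see."""
    patterns = ROLE_SCOPES.get(role, set())
    return [t for t in all_tools
            if any(fnmatch.fnmatch(t, p) for p in patterns)]
```

Because filtering happens at the gateway, the user's client never even learns that out-of-scope tools exist, which also keeps their context smaller.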

Full writeup if anyone wants more detail: https://www.lunar.dev/post/how-to-enable-ai-for-every-department-not-just-engineering

u/hboMODER — 3 days ago
▲ 5 r/MCPservers+1 crossposts

MCP Server for Google Ads, Merchant Center, GSC, GA4, Semrush and WordPress?

Is there a free MCP server or paid tool that can connect Google Ads, Google Merchant Center, Google Search Console, GA4, and hopefully WordPress/WooCommerce into one place for Claude or ChatGPT to audit?

I am new to MCPs and have tried searching, but I mostly find separate tools like n8n, Supermetrics, Markifact and Zapier MCP. I am looking for something easier and more all-in-one, mainly for SEO, Ads and Merchant Center auditing, with manual implementation after.

Has anyone set this up or can recommend the best option? I don't currently have a lot to spend, so I'm trying to keep costs down.

u/Creme-Low — 3 days ago
▲ 83 r/MCPservers+3 crossposts

Free Google search MCP that actually works.

(Demo runs Chrome visibly for clarity. Actual usage runs headless by default.)

✅ Actually works (tested 6 free MCPs, all failed)

✅ Search + URL extract in one MCP (replaces the usual search MCP + fetch MCP combo)

✅ 4 tools: `search` / `search_parallel` / `extract` / `search_extract`

✅ No API key, no proxies, no solver

✅ Auto CAPTCHA recovery (Chrome opens, human solves once, retries)

When CAPTCHA fires on any tool, a visible Chrome window opens for a human to solve. Each solve preserves the profile's reputation with Google. Built for sustainable, ethical use.
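The recovery flow boils down to a retry loop: run headless, and on CAPTCHA detection reopen a visible window for the human solve, then retry. A stubbed pure-Python sketch of that loop (the real server drives Playwright in TypeScript; `fetch_page` here is a fake stand-in for navigation):

```python
# Stubbed sketch of the CAPTCHA-recovery loop; no real browser involved.
def make_fetch(fail_first_headless: bool):
    """Build a fake fetcher: headless calls hit a CAPTCHA until a visible
    (human-in-the-loop) run has 'solved' it."""
    state = {"solved": not fail_first_headless}
    def fetch_page(query: str, headless: bool) -> dict:
        if headless and not state["solved"]:
            return {"captcha": True}
        state["solved"] = True  # a visible run lets the human solve it
        return {"captcha": False, "results": [f"result for {query}"]}
    return fetch_page

def search(query: str, fetch_page, max_retries: int = 2) -> list[str]:
    """Headless first; on CAPTCHA, open visibly for a human solve, retry."""
    for _ in range(max_retries):
        page = fetch_page(query, headless=True)
        if not page["captcha"]:
            return page["results"]
        fetch_page(query, headless=False)  # human-in-the-loop solve
    raise RuntimeError("CAPTCHA not cleared")
```

The key property is that the solve persists in the browser profile, so subsequent headless calls inherit the cleared state.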

Speed (1Gbps):

- sequential: ~1.5s/q (warm)

- 4 parallel: ~2s wall

- 10 parallel: ~5s wall

Tools: `search` / `search_parallel` / `extract(url)` / `search_extract(query)`. The last one bundles search + parallel article extraction (Readability + Turndown).

Stack: TS, Playwright + stealth, Readability, Turndown. ~600 LOC.

💻 https://github.com/HarimxChoi/google-surf-mcp

📦 https://www.npmjs.com/package/google-surf-mcp

⭐ Star helps a solo dev keep maintaining.

Ask me anything about architecture, reliability, or scaling.

u/GarrixMrtin — 11 days ago
▲ 1 r/MCPservers+1 crossposts

Beyond MCP: Handling 845 Tools with 92% less context bloat via Elemm

Hi everyone,

I’ve been diving deep into how AIs interact with tools and quickly hit a wall with the Model Context Protocol (MCP). As soon as you build complex, real-world toolsets, MCP becomes inefficient—bloating the context window and killing performance.

To solve this, I’ve developed Elemm (Every Landmark Enables Massive Modularity), also known as "The Landmark Manifest Protocol."

👉 GitHub:https://github.com/v3rm1ll1on/elemm

Check out the docs and the benchmarks on GitHub.

MCP Classic vs Elemm - Model: GPT-OSS-120B - 111 Available Tools

What Elemm enables:

  • Custom Tooling: Turn any Python function into a "Landmark" with a single decorator.
  • Instant API Integration: Point to an OpenAPI or GraphQL URL, and your agent navigates it instantly with surgical precision.
  • Seamless Migration: Easily bridge your existing tools into a manifest-driven architecture.

The Landmark Advantage

Elemm doesn't cram every tool definition into the prompt. Instead, it provides the agent with a dynamic Manifest File for safe, "lazy-loaded" navigation.
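A toy version of the decorator-plus-manifest idea (hypothetical names; see the repo for Elemm's real API): the agent holds only the manifest in context, and full definitions load lazily when it commits to using a landmark.

```python
import inspect

# Hypothetical landmark registry; names are illustrative only.
LANDMARKS = {}

def landmark(fn):
    """Register a function; only its name/summary enter the manifest."""
    LANDMARKS[fn.__name__] = fn
    return fn

@landmark
def get_user(user_id: str) -> dict:
    """Fetch a user record by id."""
    return {"id": user_id}

def manifest() -> list[dict]:
    """What the agent keeps in context: names and one-line summaries."""
    return [{"name": n, "summary": (f.__doc__ or "").strip()}
            for n, f in LANDMARKS.items()]

def load_definition(name: str) -> str:
    """Fetched lazily, only when the agent decides to use the landmark."""
    return str(inspect.signature(LANDMARKS[name]))
```

With hundreds of landmarks, the manifest grows linearly in one-liners while the expensive schemas stay out of the prompt until needed.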

The Benchmarks:

  • Scale: I gave an agent access to 845 tools simultaneously (GitHub API) with minimal token usage and 100% success rate on flagship models (Claude, Gemini, GPT-4).
  • Efficiency: Compared to classic MCP, Elemm shows -92% token savings and -84% fewer steps.
  • Edge Performance: Even using a tiny "goldfish-brain" model (Qwen 3.5 0.8B), I solved a multi-step forensic audit involving 111 tools with a 70% success rate. Standard MCP typically fails at the first step in this scenario.

Core Gateway Features:

  • Universal Gateway: A built-in bridge for OpenAPI, GraphQL, and native Elemm services via MCP.
  • On-Demand Discovery: Agents only load the definitions they actually need, preventing context overflow.
  • Sequence Engine: Execute multiple API calls in a single turn with native data piping (Output A → Input B).
  • Guardian Security: A policy engine that blocks dangerous patterns (e.g., delete_*) and hides restricted landmarks from the agent.
  • Secure Vault: Local credential management. API keys are injected server-side and never exposed to the LLM.
  • SmartRepair: Instead of cryptic stack traces, agents receive actionable "Remedies," allowing them to self-correct on the fly.
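The sequence idea (Output A → Input B in one turn) is what cuts the step count; a minimal sketch of such piping, with a made-up `$prev` placeholder syntax rather than Elemm's actual format:

```python
def run_sequence(steps, tools):
    """Run steps in order; '$prev' in an argument is replaced by the
    previous step's output, so one turn can chain several calls."""
    prev = None
    for name, args in steps:
        resolved = {k: (prev if v == "$prev" else v) for k, v in args.items()}
        prev = tools[name](**resolved)
    return prev
```

Without piping, each intermediate result would round-trip through the model's context; with it, only the final output does.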

What this means for the future…

The era of manually hard-coding tool definitions is coming to an end. As we move toward Large Action Models and autonomous agents, we need a standardized, manifest-driven infrastructure that allows AI to navigate vast API landscapes without human intervention or context exhaustion. Elemm is the blueprint for this future: a world where agents don't just use tools we give them, but autonomously discover, secure, and master any interface they encounter.

Testimonials of the Agents:

"With ELEMM, I reduced token consumption by over 90% when deploying autonomous agents to large APIs—turning a $2.15 task into under $0.25."

Claude 4.6 Sonnet, Anthropic (via Claude Desktop)

"Elemm is a true game-changer; instead of juggling hundreds of tool definitions at once, I can discover complex APIs in a structured, token-efficient way on demand. The ability to batch multiple actions via execute_sequence allows me to solve tasks with far greater precision and significantly less context noise than with classic MCP."

Gemini 3 Flash, Google (Antigravity)

See some examples to learn how it works.

I’d love to hear your thoughts or discuss the walls you've hit when trying to scale MCP!

u/overlord_sid85 — 2 days ago
▲ 17 r/MCPservers+2 crossposts

Ok so we all know most well-known SaaS companies have MCP by now. It's either an unofficial one or an official one. I thought that if a company has an official MCP it would be made with best practices in mind. I was completely wrong.

The Slack MCP doesn't expose nearly enough endpoints, and what it does expose has to be loaded into the agent's context every time. There's a newer approach called code-mode: you expose a search tool the agent can use to find the exact tools required for a multi-step task, plus an execute tool where it writes custom TypeScript, chaining APIs inside a secured sandbox.
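The code-mode shape reduces to two tools: a search over the API catalog, and an execute that runs agent-written code with only that catalog in scope. A rough Python sketch with made-up Slack-ish operations (the real version sandboxes TypeScript):

```python
# Illustrative catalog; in code-mode the agent never sees all of this up
# front, it searches for what it needs and then submits one script.
CATALOG = {
    "slack.list_channels": lambda: ["#general", "#dev"],
    "slack.post_message": lambda channel, text: f"{channel}: {text}",
}

def search_tools(query: str) -> list[str]:
    """Tool 1: find operations relevant to the current task."""
    return [name for name in CATALOG if query in name]

def execute(script):
    """Tool 2: run agent-written code with only the catalog in scope."""
    return script(CATALOG)

# A multi-step task as one execute call instead of N tool round-trips.
result = execute(lambda api: api["slack.post_message"](
    api["slack.list_channels"]()[0], "deploy done"))
```

Here a plain dict of callables stands in for the sandbox; a production version needs real isolation (timeouts, no filesystem/network beyond the catalog).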

I did this in a few hours, benchmarked it against Slack, and IT FUCKING OUTPERFORMS IT. Like unless I'm clearly missing something, why don't all these massive companies take the time to make these small improvements to their MCP that in turn will boost efficiency and accuracy by 3x+?

The benchmark link is in the comments

u/No_Iron1885 — 10 days ago
▲ 7 r/MCPservers+1 crossposts

Been tired of writing MCP server boilerplate every time I want to expose a REST API to Claude.

So I built mcp-gen — a CLI that takes an OpenAPI 3.x spec and generates a complete TypeScript MCP server project:

- Every path+method becomes a registered tool

- Input schemas derived from parameters + request body

- Example responses from the spec pre-wired as stubs

- Dockerfile, GitHub Actions CI, README — all included
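The path+method → tool mapping is essentially a walk over the spec. A sketch in Python for brevity (the generator itself emits TypeScript), using a tiny inline spec:

```python
# Minimal sketch: every (path, method) pair in an OpenAPI 3.x spec becomes
# one tool whose input schema is derived from the declared parameters.
SPEC = {
    "paths": {
        "/pets/{petId}": {
            "get": {
                "operationId": "getPet",
                "parameters": [
                    {"name": "petId", "in": "path",
                     "schema": {"type": "string"}, "required": True}
                ],
            }
        }
    }
}

def spec_to_tools(spec: dict) -> list[dict]:
    """Walk paths/methods and emit MCP-style tool definitions."""
    tools = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            props = {p["name"]: p.get("schema", {})
                     for p in op.get("parameters", [])}
            required = [p["name"] for p in op.get("parameters", [])
                        if p.get("required")]
            tools.append({
                "name": op.get("operationId", f"{method}_{path}"),
                "inputSchema": {"type": "object", "properties": props,
                                "required": required},
            })
    return tools
```

A real generator also has to merge requestBody schemas and resolve `$ref`s, which is where most of the actual complexity lives.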

→ GitHub: https://github.com/ChristopherDond/MCP-Generator.git

Still early — Python/FastAPI target and YAML input coming next. Happy to answer questions or take feedback.

u/ChristopherDci — 8 days ago
▲ 4 r/MCPservers+1 crossposts

Share a link and description of the app you're building, and I'll sign up and provide genuine feedback. I'd appreciate it if you also reviewed my project.

I’m building 1 Server, a curated marketplace for MCP servers

Our flagship product is 1server-mcp-engine. It lets users browse, install, and fully manage their MCP servers directly inside their chat - no need to leave or restart the client. The biggest pain point it solves is the messy JSON configuration process. Users no longer have to manually handle config files or environment variables for every client. Instead, they can securely store their keys and secrets in our encrypted vaults.

The LLM only sees references - the actual secrets are never exposed - and the engine automatically configures everything. Overall, 1 Server makes the entire MCP experience seamless and beginner-friendly.

Here is a 90-second demo:

https://reddit.com/link/1t4kw9y/video/o9l7p435kczg1/player

u/Ok_Minimum471 — 8 days ago
▲ 5 r/MCPservers+2 crossposts

Built a small MCP server this week and put it on npm: evermint-mcp.

It exposes five tools that let an agent mint cryptographically-timestamped, hash-chained receipts of its own actions. The receipts can be verified independently of the service that issued them.
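Hash-chaining is what makes the receipts independently verifiable: each receipt commits to the previous one's hash, so tampering anywhere breaks every later link. A generic sketch of the idea, not evermint-mcp's actual receipt format:

```python
import hashlib
import json

def mint(chain: list[dict], action: str, timestamp: str) -> dict:
    """Append a receipt that commits to the previous receipt's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"action": action, "timestamp": timestamp, "prev": prev}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    receipt = {**body, "hash": digest}
    chain.append(receipt)
    return receipt

def verify(chain: list[dict]) -> bool:
    """Re-derive every hash; any edit to any receipt breaks the chain."""
    prev = "0" * 64
    for r in chain:
        body = {"action": r["action"], "timestamp": r["timestamp"],
                "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if r["prev"] != prev or r["hash"] != digest:
            return False
        prev = r["hash"]
    return True
```

Verification needs nothing from the issuing service, only the chain itself, which is the property the post describes.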

Source and tools list: https://www.npmjs.com/package/evermint-mcp

Three things I'd love community input on:

  1. Are these the right five tools or is there one obvious missing primitive?
  2. How are people handling agent action audit trails today in their MCP setups?
  3. What's the ideal way to surface chain integrity warnings to a Claude Desktop user?
u/Wonderful_Snow_5974 — 7 days ago

I'm building an MCP server for our enterprise environment. The task is basically to have our local chatbot be usable through the MCP server from Claude; that is, we want Claude -> our local chatbot's MCP server. Any guidance?

u/On2ndthough — 10 days ago
▲ 4 r/MCPservers+3 crossposts

Built this so my agent could just use my browser. Same profile, same cookies, same tabs, same logged-in everything. No headless re-auth.

Plugs into Claude Code, Cursor, Zed, Continue, Windsurf over MCP.

u/AmbitiousMedia152 — 10 days ago
▲ 6 r/MCPservers+1 crossposts

MCP debugging is always such a pain. I started building my own tool to help with that, and I thought I'd ask for the specific problems you all run into; I'll try to have the site handle those as well. Any suggestions are very helpful, thank you!

u/EntertainmentBig5168 — 9 days ago