r/mcp

I built & publicly host a handful of MCP servers - free to use, no API keys/auth needed
▲ 44 r/ClaudeAI+1 crossposts


Hi there, I wanted to share something I've been working on. I've built a collection of MCP servers that are all hosted and publicly available. You can point any MCP client (like Claude/ChatGPT) at them and start using them immediately - no setup/install, no keys/auth, nothing running locally.

Here's what's live right now:

Health and science:

Academic:

Government and public data:

Utility:

To use any of these, just add the /mcp URL as a remote MCP server in your client. In Claude Desktop, that looks like this:

Adding a Remote MCP server to Claude

They're all built on @cyanheads/mcp-ts-core, an agent-native TypeScript framework I maintain for building MCP servers. If you want to build your own, npx @cyanheads/mcp-ts-core init my-mcp-server will scaffold a project for you. cd into the new directory, start up your coding agent, describe what you want to build, and it can take it from there.

I have a bunch of other MCP servers too (local tools, git operations, Obsidian, etc.) that aren't hosted but work great as stdio servers. Full list is on my GitHub profile.

Happy to answer questions about any of them. Thanks for reading!

reddit.com
u/cyanheads — 18 hours ago
▲ 49 r/mcp

Why is everything Python and TypeScript?

I understand the TypeScript for UI developers, like MCP Apps; the designers are used to building webpages. But the backend of everything is Python, basically a prototyping language. Is it that CS classes now teach Python and people are used to it? Why not faster languages like Go or Rust? With the advent of agent-assisted coding, you can write in any language, so why not choose one that is faster and uses fewer resources? Rust is about twice as fast as Go and up to 60 times as fast as Python. I just keep thinking that if I could reduce my cloud cost by 1.5 orders of magnitude, it would make sense.

reddit.com
u/Miserable-Ball-6491 — 20 hours ago
▲ 47 r/mcp

MCP gives me a portable tool layer. I'm still not sure what the right portable memory layer is.


One thing I've been running into while building agent systems is that MCP solves one portability problem really well, but not the whole problem.

It gives me a much cleaner way to move the capability surface around. Tools, servers, transports, app integrations. Great.

What it does not automatically solve is this:

where should the Agent's learned context live if I want to move it to another machine without copying an entire pile of runtime residue along with it?

The split that has started feeling sane to me is to stop calling everything "memory."

In the repo I'm building, instructions live in files like AGENTS.md and workspace.yaml. Runtime-owned execution truth stays in state/runtime.db. Durable memory bodies live as readable markdown under memory/.
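As a toy sketch of that three-way split (the path names mirror the layout above, but the classifier itself is illustrative, not code from the repo):

```python
from pathlib import PurePosixPath

# Hypothetical classifier for the three layers described above:
# instructions (portable config), runtime state (machine-local),
# and durable memory (portable recall).
INSTRUCTION_FILES = ("AGENTS.md", "workspace.yaml")

def classify(path: str) -> str:
    p = PurePosixPath(path)
    if p.name in INSTRUCTION_FILES:
        return "instructions"   # travels with the repo
    if str(p).startswith("state/"):
        return "runtime"        # machine-local, never synced
    if str(p).startswith("memory/"):
        return "durable"        # readable markdown, safe to move
    return "unclassified"
```

Having a function like this forces the question the post is asking: every file has to land in exactly one layer, or it is "runtime residue" by default.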

That distinction ended up mattering more than I expected.

Once MCP owns the tool surface, I can ask a cleaner question about the context surface.

What should be durable?

What should be resumable but not portable?

What should stay machine-local?

My bias now is:

continuity is one job.

durable recall is another.

If those get flattened together, you can have a beautiful MCP setup and still have an Agent that only really works on the original machine.

Curious how people here are drawing that line.

Once MCP handles tools, where do you want the learned context to live?

I'm keeping the repo link out of the body because I'd rather not have this get weirdly removed for reading like a promo post. If anyone wants to go deeper, I'll put the repo in the comments along with the broader technical framing I'm wrestling with: where policy should live, what should stay runtime-owned, why continuity and durable memory should be different layers, and what should or should not move across machines.

reddit.com
u/Fearless_Pea2761 — 22 hours ago
Fetch MCP Server – Enables LLMs to fetch and process web content by converting HTML into markdown for easier consumption. It supports chunked reading via pagination and provides configuration options for robots.txt compliance and proxy usage.
▲ 1 r/mcp

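The chunked-reading idea the blurb mentions can be sketched in a few lines; the function name and page size here are illustrative, not the server's actual API:

```python
# Serve a long markdown document to an LLM in fixed-size pages, so a
# client can request chunk N rather than the whole converted page.
def paginate(markdown: str, page_size: int = 2000) -> list[str]:
    return [markdown[i:i + page_size]
            for i in range(0, len(markdown), page_size)]
```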

glama.ai
u/modelcontextprotocol — 2 hours ago
PaKi Curator — Visual Medicine Art Catalog – 300 contemplative moving art works by César Yagüe. Search, browse, get recommendations.
▲ 1 r/mcp

PaKi Curator — Visual Medicine Art Catalog – 300 contemplative moving art works by César Yagüe. Search, browse, get recommendations.

glama.ai
u/modelcontextprotocol — 2 hours ago
▲ 1 r/mcp

I built an MCP bridge that exposes Android Studio’s entire toolchain to Claude Code, Copilot, and all MCP clients

Android Studio’s Gemini plugin has everything you’d need to work with Android projects: Gradle sync, ADB access, live device interaction, Compose preview rendering, docs search, Maven lookups. It’s all there but locked inside Studio.

So I built an MCP bridge that exposes it all.

Now Claude Code, Copilot, OpenCode, Kilo, any MCP client gets native Android context.

Tested it on real workflows with zero manual intervention.

If you’re using MCP with Android development, try it out and let me know what works, what breaks, what’s missing. This is a working proof of concept that MCP can bridge entire ecosystems.

Repo Link

u/Tek-Sapien — 3 hours ago
Less than 12 hours after releasing the Tool Definition Quality Score (TDQS) framework, we are already seeing servers passing with A scores!
▲ 11 r/mcp


This might be the first server that got A scores on every tool.

https://glama.ai/mcp/servers/shahabazdev/inxmail-mcp/score

Scoring an A means that the tool definition has...

  • a clear purpose
  • usage guidelines
  • an explanation of tool behavior
  • semantic parameter names
  • concise wording
  • contextually complete information
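As a toy illustration of grading against a checklist like the one above (the actual TDQS rubric is glama.ai's and is presumably more involved than a boolean checklist):

```python
# Hypothetical checklist grader; criteria names mirror the list above.
CRITERIA = ("clear_purpose", "usage_guidelines", "explains_behavior",
            "semantic_parameters", "concise", "contextually_complete")

def grade(checks: dict) -> str:
    passed = sum(bool(checks.get(c)) for c in CRITERIA)
    ratio = passed / len(CRITERIA)
    if ratio == 1.0:
        return "A"   # every criterion satisfied
    if ratio >= 0.8:
        return "B"
    if ratio >= 0.6:
        return "C"
    return "F"
```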

If more servers adopt the Tool Definition Quality Score, we will see the MCP ecosystem mature rapidly.

u/punkpeye — 16 hours ago
YouTube Insights MCP Server – Enables extraction of transcripts, keyword-based video search with metadata retrieval, and channel information discovery from YouTube videos through natural language interaction.
▲ 3 r/mcp


glama.ai
u/modelcontextprotocol — 8 hours ago
I built an MCP server that lets you understand resume weaknesses
▲ 1 r/mcp


Hi:

This is an AI-powered resume parser & full Applicant Tracking System with 21 MCP tools. Parse PDFs, extract skills, detect patterns, score candidates, and manage a complete hiring pipeline — all from your AI assistant, no manual work required.

This is what the frontend looks like: a full applicant-tracker frontend where you can test any module you want.

ai-hr-management-toolkit.vercel.app

https://preview.redd.it/3c55yxpfjbtg1.png?width=1889&format=png&auto=webp&s=703af52c39df9edd8708a3da93400a624256f3be

To connect this app to your local AI app, like Claude or Copilot, you can use the MCP server to ask questions:

https://preview.redd.it/fmri2vitjbtg1.png?width=1101&format=png&auto=webp&s=70f35e1f382b242abad1169bb086aaab79093d29

https://preview.redd.it/tp95712ujbtg1.png?width=1094&format=png&auto=webp&s=f6d2703efddc7ccc7c8a3b89f6a4507d138bf8bc

It will give you a full picture of your resume's overall strengths and weaknesses. You can also manage your job-application info here.

Try this server at: https://glama.ai/mcp/servers/XJTLUmedia/AI-HR-Management-Toolkit

reddit.com
u/ApprehensiveSkin7975 — 4 hours ago
Opendata Ademe – Access to ADEME datasets (French ecological transition agency) - data on energy, environment, waste, transport, housing
▲ 2 r/mcp


glama.ai
u/modelcontextprotocol — 8 hours ago
Python Dependency Manager Companion – Provides up-to-date Python package manager commands by cross-referencing official pip, poetry, uv, and conda documentation with automatic weekly updates.
▲ 1 r/mcp


glama.ai
u/modelcontextprotocol — 5 hours ago
pubchem-mcp-server – MCP server for the PubChem chemical database. Search compounds, fetch properties, safety data, bioactivity, cross-references, and entity summaries. STDIO & Streamable HTTP
▲ 3 r/mcp


glama.ai
u/modelcontextprotocol — 11 hours ago
▲ 6 r/mcp+1 crossposts

I watched people investigate DeFi projects using my API. Here's the pattern scammers can't fake.

I run a data API that includes DNS lookups, email validation, and web scraping. Last week I looked at how people were actually using it, and one pattern stood out: DeFi project investigation.

A group of users (6+ IPs, 20+ calls within minutes of each other, probably a team or multi-agent workflow) ran a systematic check on several projects. OceanSwap, NoviFi, NexusChain, a few others. Their method was consistent:

  1. Check if the project domain exists (DNS lookup)
  2. Check domain variants: .com, .io, .finance, .xyz
  3. Validate team email addresses — do the domains actually resolve?
  4. Scrape the website content if it exists

OceanSwap: four domain variants checked, all non-existent. That's about as clear a rug-pull signal as you'll get before money is involved.
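Steps 1-2 above can be sketched as a small probe; the resolver is injectable so the logic can be tested without network access (function names here are illustrative, not the API's):

```python
import socket

# Probe a project name across common TLD variants (steps 1-2 above).
def check_variants(name, tlds=(".com", ".io", ".finance", ".xyz"),
                   resolve=socket.gethostbyname):
    results = {}
    for tld in tlds:
        domain = name + tld
        try:
            resolve(domain)
            results[domain] = True    # domain resolves
        except OSError:
            results[domain] = False   # NXDOMAIN or lookup failure
    return results

# Every variant failing to resolve is the rug-pull signal described above.
def all_dead(results):
    return not any(results.values())
```

Note that `socket.gaierror` subclasses `OSError`, so real lookup failures are caught by the same branch as the fake resolver used in testing.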

What I found interesting is what they didn't check. None of them ran sanctions screening, company registration lookups, or beneficial ownership checks. These are the signals that separate a sophisticated scam from an amateur one. A real project has a registered entity somewhere. It has directors whose names appear in a company registry. A fake project has a nice website and a Telegram group.

The pattern that's hardest to fake:

  • Registered entity: Does a company actually exist behind this project? Check the relevant country's company registry (Companies House for the UK, Brreg for Norway, etc.)
  • Beneficial ownership: Who actually controls the entity? Not who's on the About page. Who has significant control according to the legal registry.
  • Sanctions: Are any associated individuals or entities on OFAC, EU, or UN sanctions lists?
  • Domain age + registration: A domain registered 3 weeks ago promoting an "established DeFi protocol" is a signal.

A website can be faked in an afternoon. A Companies House registration with directors, a registered address, and PSC filings takes actual identity exposure. Scammers avoid that.

The DNS + email + scrape approach works for catching the obvious fakes (non-existent domains, broken email addresses). But for projects that have a working website and a polished frontend, you need to go one layer deeper into corporate registries and sanctions data.

This is what I'm building tooling around if anyone's curious. An API that bundles these checks into single calls. But even without that, the registry data is publicly available. Companies House has a free API. OFAC publishes their sanctions list as a downloadable file. The hard part is stitching it together and keeping it current.
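The "stitching" step might look like this in a toy exact-match form (the record shape and `screen` function are illustrative; a real pipeline needs fuzzy name matching and entity resolution):

```python
# Cross-reference directors from a company-registry record against a
# sanctions name list (e.g. names extracted from OFAC's published file).
def screen(entity: dict, sanctions: set[str]) -> list[str]:
    names = [d.lower() for d in entity.get("directors", [])]
    return [n for n in names if n in sanctions]
```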

What does your due diligence process look like before you put money into a new project? Curious whether people are checking registries or mostly relying on community reputation and social signals.

reddit.com
u/Petter-Strale — 20 hours ago
▲ 6 r/mcp

I built an MCP server that lets Claude edit images through Photopea -- design posters, edit photos, apply filters, all from the terminal

I've been working on an MCP server that connects Claude (or any MCP client) to Photopea, a free browser-based alternative to Photoshop.

What it does: You describe what you want in natural language, and Claude executes it in Photopea -- creating documents, adding text, placing images, applying filters and effects, exporting files. 34 tools in total.

Example prompt:

> "Create a 1500x1500 album cover with a dark purple gradient background, add noise texture, apply motion blur for light streaks, load a custom Google Font, add the title '1337 DESIGN' with a glow effect, and export as PNG"

How it works: Claude sends commands via MCP -> the server translates them to Photopea's JavaScript API -> executes via WebSocket in your browser -> you see the result live in Photopea.

Install (one command):

claude mcp add -s user photopea -- npx -y photopea-mcp-server

Works with Claude Code, Claude Desktop, Cursor, VS Code, and Windsurf.

Links:

Open source, MIT licensed. Would love feedback.

u/Weird-Celebration914 — 19 hours ago
▲ 1 r/mcp

LeafEngines Cloners: What are you building?

🌟 THE DATA (Last 14 Days):

GitHub Metrics That Tell a Story:

```

1,106 clones (79/day)

98 unique cloners (7/day)

192 page views (14/day)

48 unique visitors (3/day)

```

🌟 The Killer Stat: 576% clone-to-view ratio

- Industry average: 10-30%

- LeafEngines: 576% (19x higher)

- What this means: Developers aren't just browsing - they're INTEGRATING

🌟

Traffic Sources (12,439 total Reddit views):

- r/MCP: 32.1% (4,000+ views) ← Our technical home

- r/ClaudeCode: 16.3% (2,000+ views) ← Claude ecosystem

- r/AgriTech: 14.6% (1,800+ views) ← Domain experts

- r/OpenSource: 6.8% (800+ views) ← OSS community

Global Reach:

- >50% of traffic from outside US/Germany/India/Canada

- International developer base from day one

🌟 THE CHALLENGE:

We have the metrics. Now we want YOUR stories.

Share what you're building with LeafEngines, get 30 days Pro FREE.

Why This Matters:

- 576% clone ratio = You're using it programmatically

- 98 unique cloners = Real developer community

- Global distribution = Solving international problems

- MCP + AgriTech crossover = Unique technical niche

🌟 What Counts:

- Agricultural automation projects

- MCP server integrations

- Claude skill enhancements

- Research/academic work

- Commercial applications

- Even just ideas/plans!

🌟 HOW TO PARTICIPATE:

  1. Comment below with your use case
    
  2. OR  create a GitHub issue/discussion
    
  3. OR tweet with #LeafEnginesChallenge
    

Submission Template (copy-paste):

```

Project: [Name]

What I'm Building: [2-3 sentences]

LeafEngines Usage: [How you use our tools]

Tech Stack: [Languages/frameworks]

Goals: [What you hope to achieve]

```

🌟WHAT WE SEE IN THE DATA:

Pattern 1: Programmatic Adoption

576% clone ratio = CI/CD pipelines, automation scripts, package dependencies

Pattern 2: Technical Community

r/MCP (32%) + r/ClaudeCode (16%) = 48% from technical communities

Pattern 3: Global Impact

>50% non-major markets = Agricultural AI solving global problems

Pattern 4: Production Ready

1,106 clones + 821 npm downloads/week = Real usage, not just interest

🌟 WHAT WE'LL DO WITH YOUR STORIES:

  1. Prioritize features based on real needs
    
  2. Build example projects from your use cases
    
  3. Connect developers with similar interests
    
  4. Feature top projects in our documentation
    
  5. Create "Developer Spotlight" series
    

🌟TIMELINE:

- Campaign: April 4 - April 18 (2 weeks)

- Pro Access: Delivered within 48 hours

- Featured Cases: Weekly highlights

- Final Report: Shared with community

🔗 RESOURCES:

- GitHub: https://github.com/QWarranto/leafengines-claude-mcp

- npm (MCP Server): https://www.npmjs.com/package/@ancientwhispers54/leafengines-mcp-server

- Claude Skill: Agricultural Intelligence

🌟 WHY PARTICIPATE?

For You:

- 30 days Pro FREE (unlimited API, priority support, advanced features)

- Community recognition

- Influence product roadmap

- Technical support

For Everyone:

- Better tools (your feedback shapes development)

- Stronger community (connect with fellow developers)

- More documentation (your use cases become examples)

- Global impact (agricultural AI helps feed the world)

🌟 LET'S TURN METRICS INTO STORIES!

1,106 clones. 98 developers. 12,439 community supporters.

Now tell us: What are YOU building?

🌱 #LeafEnginesChallenge

reddit.com
u/Longgrain54 — 7 hours ago
▲ 4 r/mcp

I built an MCP proxy that compresses tool schemas by 77%. Looking for testers to break it.

The "MCP eats my context window" complaint is real. I measured it: 57 tools across 4 servers = 7,528 tokens before the agent does anything.

So I built slim-mcp — a proxy that sits between your agent and your MCP servers. It replaces verbose JSON Schema with TypeScript-style parameter signatures in the description. Think caveman speak for tool definitions:

Before: {"type": "string", "description": "The owner of the repository"}

After: owner:s!

Result: 7,528 tokens → 1,750. 77% reduction.
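A minimal sketch of this style of compression, assuming a type-code mapping modeled on the post's `owner:s!` example (the real slim-mcp encoding may differ):

```python
# Hypothetical JSON Schema -> compact signature compressor; the notation
# mirrors the post's before/after example, not slim-mcp's actual format.
TYPE_CODES = {"string": "s", "number": "n", "integer": "i",
              "boolean": "b", "array": "a", "object": "o"}

def compress_params(schema: dict) -> str:
    required = set(schema.get("required", []))
    parts = []
    for name, prop in schema.get("properties", {}).items():
        code = TYPE_CODES.get(prop.get("type", "object"), "o")
        suffix = "!" if name in required else ""   # "!" marks required
        parts.append(f"{name}:{code}{suffix}")
    return ",".join(parts)
```

The savings come from dropping the verbose `description` strings entirely and collapsing each property's type object into two or three characters.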

We tested accuracy with 120 API calls against Claude Sonnet — zero failures at every compression level. But that's our tools and our prompts. I want to know if it holds with yours.

Looking for:

  • People running 5+ MCP servers (GitHub, Notion, Playwright, etc.) — the more tools, the better the test.

  • Cursor / Cline users who don't have Claude Code's built-in Tool Search

  • Anyone willing to try extreme mode and report if tool calls break

Default is standard (19% reduction, zero risk). The aggressive modes are opt-in.

Install: npm install -g slim-mcp

Everything else — config, benchmarks, how it works — is in the README.

npm: npmjs.com/package/slim-mcp

GitHub: github.com/Joncik91/slim-mcp

The context window problem is an engineering problem, not a protocol flaw. MCP doesn't need to die — it needs better tooling.

reddit.com
u/Blade999666 — 20 hours ago
▲ 3 r/micro_saas+4 crossposts

Do you think a unified MCP has demand?

I realize that not many developers have worked with MCP before, and now, with tons of MCP servers for each service, it can be confusing to configure all of them at the same time.

So I was wondering: is it even a good idea to build a unifier of all MCP servers through OAuth or something like that?

reddit.com
u/Then-Coconut-3614 — 18 hours ago