u/JanethL

I’m curious how others are approaching MCP + Skills in Agentic AI development.

In a recent DevTalk, we walked through an agent architecture where MCP is used primarily as a transport layer, and platform/domain expertise is packaged as “skills”: not as large system prompts or static files baked into the agent, but as injectable, on‑demand guidance delivered via MCP.

At a high level, the setup looked like this:

  • Domain docs, best practices, and patterns are collected into a skills library (as .md files)
  • The agent is given access to a minimal set of tools to avoid context overload
  • The agent pulls only the guidance it needs at runtime via a dedicated get_syntax_help() tool (progressive disclosure)


@mcp.tool()
def get_syntax_help(topic: str = "index") -> str:
    """
    IMPORTANT: Call this BEFORE writing analytics or ML SQL.

    Recommended call order:
      1) get_syntax_help(topic="guidelines")
         # native-functions-first rules + best practices
      2) get_syntax_help(topic="index")
         # discover available topics / workflows
      3) get_syntax_help(topic="<specific-topic>")
         # pull exact syntax / pattern
    """
    ...
  • The server explicitly instructs the agent to check platform guidelines before generating analytics or ML SQL
  • No filesystem coupling, no framework lock‑in
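For context, here is a minimal sketch of how a tool like that could serve skills from an in-memory library. The topic names and doc contents below are invented for illustration; the actual server loads its guidance from the repo's skills library of .md files.

```python
# Hypothetical sketch: a skills library served on demand (progressive
# disclosure). Contents are placeholders, not the real skill docs.
SKILLS = {
    "guidelines": "Prefer native in-database functions; avoid SELECT * pulls.",
    "index": "Available topics: guidelines, ml_functions, text_analytics",
    "ml_functions": "Patterns for in-database ML function calls go here.",
}

def get_syntax_help(topic: str = "index") -> str:
    """Return only the guidance the agent asked for, one topic at a time."""
    # Unknown topics fall back to the index so the agent can discover
    # valid topic names instead of failing.
    return SKILLS.get(topic, SKILLS["index"])
```

The fallback-to-index behavior is one way to keep the agent self-correcting: a bad topic name still returns something actionable rather than an error.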

What I'm trying to find out:

  • Are others combining MCP + Skills this way?
  • If you took a different approach, why?

GitHub Repo: tdsql MCP Server: https://github.com/ksturgeon-td/tdsql-mcp/blob/main/README.md

Would love to hear what patterns devs are actually using.

I wrote this up in more detail with examples, and it includes the recording of the live demo if useful: https://janethl.medium.com/building-smarter-ai-agents-for-data-science-workflows-at-scale-174fd51bf66b

u/JanethL — 13 days ago

Forcing agents to use the right tools — MCP + Skills + LangGraph demo

If you’ve been building agents with LangChain/LangGraph, you’ve probably experienced this:

Your agent works… but the way it executes is kinda messy:

  • generates inefficient SQL
  • pulls data out of your EDW or Lakehouse instead of pushing compute down
  • ignores native analytics / ML functions
  • chains together workflows that don’t scale

What we explored

We put together a demo using an MCP (Model Context Protocol) server + Skills + LangChain-style orchestration to guide agents in how to operate inside a data platform.

Instead of letting the agent figure everything out, we:

  • inject skills + platform-aware context
  • constrain tool usage (no arbitrary SQL generation)
  • guide it toward native operators (ML, stats, text, vector)
  • let it chain those into full workflows
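One way to sketch the "constrain tool usage" idea, independent of any framework (class and method names here are illustrative, not the demo's actual code): a thin gate that refuses to execute SQL until the agent has pulled the platform guidelines.

```python
# Illustrative sketch: the agent cannot run SQL until it has first
# consulted the guidelines skill. Names are hypothetical.
class ToolGate:
    def __init__(self) -> None:
        self.guidelines_seen = False

    def get_syntax_help(self, topic: str = "index") -> str:
        # Record that the agent has read the platform guidelines.
        if topic == "guidelines":
            self.guidelines_seen = True
        return f"<skill doc for {topic}>"  # stand-in for the skills library

    def run_sql(self, query: str) -> str:
        # Refuse arbitrary SQL generation until guidelines were fetched.
        if not self.guidelines_seen:
            raise PermissionError(
                "Call get_syntax_help(topic='guidelines') before writing SQL"
            )
        return f"executed: {query}"  # stand-in for the real executor
```

In a real MCP server the "gate" is softer (docstring instructions plus prompt-level nudges rather than a hard exception), but the hard version makes the ordering constraint testable.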

What this enables

Agents that can:

  • pick the right analytic function for the task
  • recognize when SQL isn’t enough
  • use in-database ML / analytics instead of client-side code
  • build end-to-end pipelines that are actually deployable
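The "push compute down instead of pulling data out" point can be made concrete with a small (hypothetical) helper: rather than having the agent emit SELECT * and aggregate client-side, the guidance steers it toward building queries the warehouse executes itself. The function and table/column names below are invented for illustration.

```python
# Hypothetical contrast: client-side pull vs. in-database pushdown.
# Anti-pattern: SELECT * FROM sales, then aggregate in pandas locally.
# Pushdown: let the EDW/Lakehouse aggregate and return only the result.
def pushdown_summary(table: str, metric: str, group_by: str) -> str:
    """Build an aggregate query that runs inside the data platform."""
    return (
        f"SELECT {group_by}, AVG({metric}) AS avg_{metric} "
        f"FROM {table} GROUP BY {group_by}"
    )
```

The same shape extends to the native ML/stats/text/vector operators: the agent composes SQL that invokes them in-database, so only small results ever leave the platform.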

Why this matters (imo)

A lot of agent frameworks make it easy to compose workflows, but not necessarily to optimize execution for a specific system.

So you end up with:

“Correct answer, wrong execution strategy”

This approach is more about:

“Constrain the agent so it behaves like a good engineer for that platform”

Demo + repo

Repo:
https://github.com/ksturgeon-td/tdsql-mcp/blob/main/README.md

Free environment if you want to try it:
https://www.teradata.com/getting-started/demos/clearscape-analytics

u/JanethL — 15 days ago