Built a local LM Studio stats panel that shows what my AI stack is actually doing
I’ve been building out a local LM Studio dashboard that gives me a much clearer view of what my stack is actually doing across MCP servers, tools, failures, token flow, and completed actions.
It tracks things like:
- configured MCP servers
- successful vs failed calls
- token usage through LM Studio
- estimated API cost avoided by running locally
- repeated failure patterns
- server health rollups
- action history for research, image generation, WordPress, email, terminal tasks, uploads, and more
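For anyone curious about the shape of the data, here's a minimal sketch of the kind of per-server record this implies (Python; field and method names are illustrative, not the panel's actual schema, and the cost-avoided rates are whatever hosted model you'd compare against):

```python
from dataclasses import dataclass, field

@dataclass
class ServerStats:
    """Rolling counters for one MCP server (illustrative schema)."""
    name: str
    calls_ok: int = 0
    calls_failed: int = 0
    tokens_in: int = 0
    tokens_out: int = 0
    failures: list[str] = field(default_factory=list)  # raw error messages

    @property
    def success_rate(self) -> float:
        total = self.calls_ok + self.calls_failed
        return self.calls_ok / total if total else 1.0

    def cost_avoided_usd(self, in_per_1k: float, out_per_1k: float) -> float:
        # Rough estimate: what the same tokens would have cost on a hosted
        # API at your chosen comparison rates (the rates are an assumption,
        # not something the panel prescribes).
        return self.tokens_in / 1000 * in_per_1k + self.tokens_out / 1000 * out_per_1k
```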
One of the most useful parts is that it doesn’t just show stats: it highlights what needs attention, what’s improving, which tools are noisy, and which repeated issues should be fixed first.
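As a rough sketch of what "fix the most repeated issues first" can look like: group failures by a normalized signature and rank by count. This is illustrative, not the panel's actual logic:

```python
import re
from collections import Counter

def top_repeated_failures(messages: list[str], n: int = 5) -> list[tuple[str, int]]:
    """Rank failure messages by how often they repeat.

    Crude normalization: keep the first line and replace digits with '#'
    so retries of the same error (ports, ids, timestamps) collapse into
    one bucket instead of looking like ten unrelated failures.
    """
    sigs = [
        re.sub(r"\d+", "#", (msg.splitlines() or [""])[0])[:120]
        for msg in messages
    ]
    return Counter(sigs).most_common(n)
```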
A few things I’m aiming for with it:
- make local AI workflows easier to debug
- see which MCP servers are actually reliable
- track real work completed, not just model chats
- understand where tokens are going (see the sketch after this list)
- create a feedback loop so the stack can improve over time
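On the token side, LM Studio's local server speaks the OpenAI-compatible API (default `http://localhost:1234/v1`), and each chat completion response carries a `usage` object you can accumulate, which is all the tracking really needs. A minimal sketch, where the model name and port are assumptions about your setup:

```python
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",  # LM Studio default; adjust if yours differs
    json={
        "model": "local-model",  # whatever model you have loaded
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=120,
)
usage = resp.json().get("usage", {})
print(usage.get("prompt_tokens"), usage.get("completion_tokens"))
```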
I’m sharing a video of the panel here because I think local AI needs this kind of visibility, especially once you start stacking LM Studio with MCP tools, automation, memory, WordPress, browser actions, and custom workflows.
Would love feedback on it.
What would you want to see in a dashboard like this?