
Prism - give a local LLM control over its own interface
Hi everyone! I built Prism, a self-hosted AI workspace that runs entirely on your hardware — no cloud, no API keys.
The core idea: give the local model actual control over its environment, not just a chat box. Connected to Ollama, the agent can:
- Write & run code, execute shell commands, schedule cron tasks
- Create interactive dashboard widgets (HTML/JS) pinned to a drag-and-drop UI
- Search the web (via self-hosted SearXNG), fetch URLs, control a browser
- Build and query its own local knowledge base (RAG with pgvector)
- Define and register new tools at runtime — extending itself without touching the code
- Store secrets, send notifications, maintain persistent memory per session
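The runtime tool registration above is the most unusual part, so here is a minimal sketch of the general idea: a name-to-function registry the agent can extend while the process is running. All names here (`Tool`, `Registry`, `Register`, `Call`) are illustrative assumptions, not Prism's actual API.

```go
package main

import (
	"errors"
	"fmt"
)

// Tool is a callable the agent can invoke by name.
// (Hypothetical type -- Prism's real tool interface may differ.)
type Tool func(args map[string]string) (string, error)

// Registry maps tool names to implementations and can grow at runtime,
// which is what lets new tools be added without touching the code.
type Registry struct {
	tools map[string]Tool
}

func NewRegistry() *Registry {
	return &Registry{tools: make(map[string]Tool)}
}

// Register adds (or replaces) a tool without restarting the process.
func (r *Registry) Register(name string, t Tool) {
	r.tools[name] = t
}

// Call dispatches a named tool with its arguments.
func (r *Registry) Call(name string, args map[string]string) (string, error) {
	t, ok := r.tools[name]
	if !ok {
		return "", errors.New("unknown tool: " + name)
	}
	return t(args)
}

func main() {
	reg := NewRegistry()

	// A tool "defined at runtime" -- e.g. one the model just described.
	reg.Register("greet", func(args map[string]string) (string, error) {
		return "hello, " + args["name"], nil
	})

	out, err := reg.Call("greet", map[string]string{"name": "prism"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints: hello, prism
}
```

In a real agent loop, `Call` would be driven by the model's tool-use output rather than hard-coded, but the dispatch mechanism is the same.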
Stack: Go backend, vanilla JS frontend, PostgreSQL + pgvector, SearXNG. Runs with a single docker compose up.
Still a work in progress, but it's usable. Feedback welcome: github.com/blazux/prism