


AI Assistant v1 - My local DevOps Assistant focused on Kubernetes!
While studying Python, FastAPI, LLMs, and AI, I decided to stop just watching tutorials and start building.
I created AI Assistant v1 - a chat app that answers technical DevOps and Kubernetes questions in a clear, fast, and practical way.
Everything runs 100% locally on my machine with no paid APIs.
What it does now:
- Explains Kubernetes concepts (Pods, Deployments, Services, Ingress, etc.)
- Provides real-world examples and useful kubectl commands
- Includes best practices for cloud-native environments
- Answers like an experienced Senior DevOps Engineer
Tech stack I used (built from scratch):
- Frontend: Streamlit (clean chat interface)
- Backend: Python - FastAPI (async /analyze endpoint)
- LLM: Ollama + Llama 3.2 3B (via an OpenAI-compatible client)
- Prompt Engineering: Optimized system prompt
I tested it with questions like “What is a Pod in Kubernetes?” and the improvement was huge.
It’s still an MVP with room for improvement, but seeing it work locally really shows me how powerful local LLMs can be for developers and DevOps engineers, and even as a foundation for bigger products.
I’ll upload the full project to GitHub soon with a proper README, requirements, and setup instructions.
Below are some screenshots.
If you want the code once it’s on GitHub, just let me know.
Is there a Discord server for this kind of thing? I’d love to trade ideas.